Mar 14 00:13:14.105483 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026
Mar 14 00:13:14.105628 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:13:14.105645 kernel: BIOS-provided physical RAM map:
Mar 14 00:13:14.105715 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 14 00:13:14.105775 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 14 00:13:14.105785 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 14 00:13:14.105795 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 14 00:13:14.105805 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 14 00:13:14.105814 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 14 00:13:14.105827 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 14 00:13:14.105836 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 14 00:13:14.105845 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 14 00:13:14.105907 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 14 00:13:14.105917 kernel: NX (Execute Disable) protection: active
Mar 14 00:13:14.105928 kernel: APIC: Static calls initialized
Mar 14 00:13:14.105992 kernel: SMBIOS 2.8 present.
Mar 14 00:13:14.106003 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 14 00:13:14.106013 kernel: Hypervisor detected: KVM
Mar 14 00:13:14.106022 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 14 00:13:14.106031 kernel: kvm-clock: using sched offset of 18133103968 cycles
Mar 14 00:13:14.106090 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 14 00:13:14.106100 kernel: tsc: Detected 2445.426 MHz processor
Mar 14 00:13:14.106109 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 14 00:13:14.106119 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 14 00:13:14.106133 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 14 00:13:14.106143 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 14 00:13:14.106152 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 14 00:13:14.106162 kernel: Using GB pages for direct mapping
Mar 14 00:13:14.106171 kernel: ACPI: Early table checksum verification disabled
Mar 14 00:13:14.106181 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 14 00:13:14.106190 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:14.106200 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:14.106209 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:14.106222 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 14 00:13:14.106231 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:14.106241 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:14.106251 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:14.106261 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 14 00:13:14.106270 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 14 00:13:14.106280 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 14 00:13:14.106296 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 14 00:13:14.106309 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 14 00:13:14.106320 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 14 00:13:14.106330 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 14 00:13:14.106340 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 14 00:13:14.106351 kernel: No NUMA configuration found
Mar 14 00:13:14.106361 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 14 00:13:14.106375 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 14 00:13:14.106385 kernel: Zone ranges:
Mar 14 00:13:14.106395 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 14 00:13:14.106405 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 14 00:13:14.106415 kernel: Normal empty
Mar 14 00:13:14.106425 kernel: Movable zone start for each node
Mar 14 00:13:14.106436 kernel: Early memory node ranges
Mar 14 00:13:14.106448 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 14 00:13:14.106457 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 14 00:13:14.106466 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 14 00:13:14.106480 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 14 00:13:14.106612 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 14 00:13:14.106626 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 14 00:13:14.106636 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 14 00:13:14.106645 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 14 00:13:14.109190 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 14 00:13:14.109209 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 14 00:13:14.109278 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 14 00:13:14.109293 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 14 00:13:14.109311 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 14 00:13:14.109321 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 14 00:13:14.109331 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 14 00:13:14.109342 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 14 00:13:14.109408 kernel: TSC deadline timer available
Mar 14 00:13:14.109420 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 14 00:13:14.109430 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 14 00:13:14.109440 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 14 00:13:14.109623 kernel: kvm-guest: setup PV sched yield
Mar 14 00:13:14.109646 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 14 00:13:14.109728 kernel: Booting paravirtualized kernel on KVM
Mar 14 00:13:14.109741 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 14 00:13:14.109754 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 14 00:13:14.109766 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 14 00:13:14.109832 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 14 00:13:14.110144 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 14 00:13:14.110158 kernel: kvm-guest: PV spinlocks enabled
Mar 14 00:13:14.110170 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 14 00:13:14.110255 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:13:14.110266 kernel: random: crng init done
Mar 14 00:13:14.110277 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 14 00:13:14.110288 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 14 00:13:14.110299 kernel: Fallback order for Node 0: 0
Mar 14 00:13:14.110310 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 14 00:13:14.110322 kernel: Policy zone: DMA32
Mar 14 00:13:14.110333 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 14 00:13:14.110352 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 14 00:13:14.110364 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 14 00:13:14.110376 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 14 00:13:14.110387 kernel: ftrace: allocated 149 pages with 4 groups
Mar 14 00:13:14.110399 kernel: Dynamic Preempt: voluntary
Mar 14 00:13:14.110411 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 14 00:13:14.110425 kernel: rcu: RCU event tracing is enabled.
Mar 14 00:13:14.110438 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 14 00:13:14.110452 kernel: Trampoline variant of Tasks RCU enabled.
Mar 14 00:13:14.110469 kernel: Rude variant of Tasks RCU enabled.
Mar 14 00:13:14.110483 kernel: Tracing variant of Tasks RCU enabled.
Mar 14 00:13:14.110603 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 14 00:13:14.110617 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 14 00:13:14.110755 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 14 00:13:14.110772 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 14 00:13:14.110783 kernel: Console: colour VGA+ 80x25
Mar 14 00:13:14.110793 kernel: printk: console [ttyS0] enabled
Mar 14 00:13:14.110804 kernel: ACPI: Core revision 20230628
Mar 14 00:13:14.110825 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 14 00:13:14.110837 kernel: APIC: Switch to symmetric I/O mode setup
Mar 14 00:13:14.110848 kernel: x2apic enabled
Mar 14 00:13:14.110859 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 14 00:13:14.110870 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 14 00:13:14.110883 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 14 00:13:14.110895 kernel: kvm-guest: setup PV IPIs
Mar 14 00:13:14.110908 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 14 00:13:14.111007 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 14 00:13:14.111025 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 14 00:13:14.111037 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 14 00:13:14.111049 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 14 00:13:14.111070 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 14 00:13:14.111083 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 14 00:13:14.111096 kernel: Spectre V2 : Mitigation: Retpolines
Mar 14 00:13:14.111110 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 14 00:13:14.111129 kernel: Speculative Store Bypass: Vulnerable
Mar 14 00:13:14.111141 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 14 00:13:14.111218 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 14 00:13:14.111235 kernel: active return thunk: srso_alias_return_thunk
Mar 14 00:13:14.111248 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 14 00:13:14.111261 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 14 00:13:14.111274 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 14 00:13:14.111287 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 14 00:13:14.111301 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 14 00:13:14.111322 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 14 00:13:14.111334 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 14 00:13:14.111348 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 14 00:13:14.111361 kernel: Freeing SMP alternatives memory: 32K
Mar 14 00:13:14.111374 kernel: pid_max: default: 32768 minimum: 301
Mar 14 00:13:14.111387 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 14 00:13:14.111400 kernel: landlock: Up and running.
Mar 14 00:13:14.111412 kernel: SELinux: Initializing.
Mar 14 00:13:14.111425 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:14.111444 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 14 00:13:14.111452 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 14 00:13:14.111459 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:13:14.111466 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:13:14.111473 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 14 00:13:14.111480 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 14 00:13:14.111486 kernel: signal: max sigframe size: 1776
Mar 14 00:13:14.111615 kernel: rcu: Hierarchical SRCU implementation.
Mar 14 00:13:14.111628 kernel: rcu: Max phase no-delay instances is 400.
Mar 14 00:13:14.111635 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 14 00:13:14.111642 kernel: smp: Bringing up secondary CPUs ...
Mar 14 00:13:14.111649 kernel: smpboot: x86: Booting SMP configuration:
Mar 14 00:13:14.111718 kernel: .... node #0, CPUs: #1 #2 #3
Mar 14 00:13:14.111731 kernel: smp: Brought up 1 node, 4 CPUs
Mar 14 00:13:14.111744 kernel: smpboot: Max logical packages: 1
Mar 14 00:13:14.111756 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 14 00:13:14.111764 kernel: devtmpfs: initialized
Mar 14 00:13:14.111771 kernel: x86/mm: Memory block size: 128MB
Mar 14 00:13:14.111783 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 14 00:13:14.111790 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 14 00:13:14.111797 kernel: pinctrl core: initialized pinctrl subsystem
Mar 14 00:13:14.111804 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 14 00:13:14.111810 kernel: audit: initializing netlink subsys (disabled)
Mar 14 00:13:14.111817 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 14 00:13:14.111823 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 14 00:13:14.111830 kernel: audit: type=2000 audit(1773447182.537:1): state=initialized audit_enabled=0 res=1
Mar 14 00:13:14.111837 kernel: cpuidle: using governor menu
Mar 14 00:13:14.111847 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 14 00:13:14.111854 kernel: dca service started, version 1.12.1
Mar 14 00:13:14.111860 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 14 00:13:14.111868 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 14 00:13:14.111874 kernel: PCI: Using configuration type 1 for base access
Mar 14 00:13:14.111882 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 14 00:13:14.111888 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 14 00:13:14.111895 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 14 00:13:14.111905 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 14 00:13:14.111911 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 14 00:13:14.111918 kernel: ACPI: Added _OSI(Module Device)
Mar 14 00:13:14.111925 kernel: ACPI: Added _OSI(Processor Device)
Mar 14 00:13:14.111931 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 14 00:13:14.111938 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 14 00:13:14.111945 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 14 00:13:14.111952 kernel: ACPI: Interpreter enabled
Mar 14 00:13:14.111959 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 14 00:13:14.111965 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 14 00:13:14.111975 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 14 00:13:14.111982 kernel: PCI: Using E820 reservations for host bridge windows
Mar 14 00:13:14.111988 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 14 00:13:14.111995 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 14 00:13:14.112849 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 14 00:13:14.113088 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 14 00:13:14.113292 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 14 00:13:14.113316 kernel: PCI host bridge to bus 0000:00
Mar 14 00:13:14.113896 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 14 00:13:14.114104 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 14 00:13:14.114299 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 14 00:13:14.114604 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 14 00:13:14.114875 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 14 00:13:14.115071 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 14 00:13:14.115263 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 14 00:13:14.115912 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 14 00:13:14.116342 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 14 00:13:14.116765 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 14 00:13:14.116988 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 14 00:13:14.117209 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 14 00:13:14.117444 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 14 00:13:14.117985 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 14 00:13:14.118215 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 14 00:13:14.118449 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 14 00:13:14.119025 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 14 00:13:14.119830 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 14 00:13:14.120061 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 14 00:13:14.120284 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 14 00:13:14.120727 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 14 00:13:14.121166 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 14 00:13:14.121384 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 14 00:13:14.121824 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 14 00:13:14.122051 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 14 00:13:14.122275 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 14 00:13:14.122789 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 14 00:13:14.123169 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 14 00:13:14.123851 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 14 00:13:14.124093 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 14 00:13:14.124311 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 14 00:13:14.124998 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 14 00:13:14.126081 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 14 00:13:14.126110 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 14 00:13:14.126122 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 14 00:13:14.126133 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 14 00:13:14.126143 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 14 00:13:14.126154 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 14 00:13:14.126164 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 14 00:13:14.126174 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 14 00:13:14.126185 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 14 00:13:14.126199 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 14 00:13:14.126209 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 14 00:13:14.126220 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 14 00:13:14.126231 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 14 00:13:14.126244 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 14 00:13:14.126256 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 14 00:13:14.126265 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 14 00:13:14.126278 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 14 00:13:14.126288 kernel: iommu: Default domain type: Translated
Mar 14 00:13:14.126302 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 14 00:13:14.126312 kernel: PCI: Using ACPI for IRQ routing
Mar 14 00:13:14.126321 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 14 00:13:14.126331 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 14 00:13:14.126343 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 14 00:13:14.127101 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 14 00:13:14.127296 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 14 00:13:14.127476 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 14 00:13:14.127873 kernel: vgaarb: loaded
Mar 14 00:13:14.127893 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 14 00:13:14.127903 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 14 00:13:14.127913 kernel: clocksource: Switched to clocksource kvm-clock
Mar 14 00:13:14.127924 kernel: VFS: Disk quotas dquot_6.6.0
Mar 14 00:13:14.127937 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 14 00:13:14.127949 kernel: pnp: PnP ACPI init
Mar 14 00:13:14.132891 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 14 00:13:14.132912 kernel: pnp: PnP ACPI: found 6 devices
Mar 14 00:13:14.132933 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 14 00:13:14.132945 kernel: NET: Registered PF_INET protocol family
Mar 14 00:13:14.132956 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 14 00:13:14.132965 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 14 00:13:14.132974 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 14 00:13:14.132984 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 14 00:13:14.132994 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 14 00:13:14.133003 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 14 00:13:14.133013 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:13:14.133026 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 14 00:13:14.133036 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 14 00:13:14.133045 kernel: NET: Registered PF_XDP protocol family
Mar 14 00:13:14.133206 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 14 00:13:14.133359 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 14 00:13:14.147365 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 14 00:13:14.150462 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 14 00:13:14.151860 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 14 00:13:14.152068 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 14 00:13:14.152088 kernel: PCI: CLS 0 bytes, default 64
Mar 14 00:13:14.152102 kernel: Initialise system trusted keyrings
Mar 14 00:13:14.152114 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 14 00:13:14.152124 kernel: Key type asymmetric registered
Mar 14 00:13:14.152135 kernel: Asymmetric key parser 'x509' registered
Mar 14 00:13:14.152147 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 14 00:13:14.152159 kernel: io scheduler mq-deadline registered
Mar 14 00:13:14.152172 kernel: io scheduler kyber registered
Mar 14 00:13:14.152190 kernel: io scheduler bfq registered
Mar 14 00:13:14.152201 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 14 00:13:14.152215 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 14 00:13:14.152228 kernel: hrtimer: interrupt took 6802599 ns
Mar 14 00:13:14.152238 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 14 00:13:14.152248 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 14 00:13:14.152261 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 14 00:13:14.152274 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 14 00:13:14.152285 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 14 00:13:14.152305 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 14 00:13:14.152318 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 14 00:13:14.153184 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 14 00:13:14.153205 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 14 00:13:14.153405 kernel: rtc_cmos 00:04: registered as rtc0
Mar 14 00:13:14.153790 kernel: rtc_cmos 00:04: setting system clock to 2026-03-14T00:13:11 UTC (1773447191)
Mar 14 00:13:14.154038 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 14 00:13:14.154056 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 14 00:13:14.154073 kernel: NET: Registered PF_INET6 protocol family
Mar 14 00:13:14.154083 kernel: Segment Routing with IPv6
Mar 14 00:13:14.154094 kernel: In-situ OAM (IOAM) with IPv6
Mar 14 00:13:14.154105 kernel: NET: Registered PF_PACKET protocol family
Mar 14 00:13:14.154117 kernel: Key type dns_resolver registered
Mar 14 00:13:14.154130 kernel: IPI shorthand broadcast: enabled
Mar 14 00:13:14.154141 kernel: sched_clock: Marking stable (7386084379, 1419746451)->(10180653683, -1374822853)
Mar 14 00:13:14.154151 kernel: registered taskstats version 1
Mar 14 00:13:14.154161 kernel: Loading compiled-in X.509 certificates
Mar 14 00:13:14.154176 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec'
Mar 14 00:13:14.154187 kernel: Key type .fscrypt registered
Mar 14 00:13:14.154197 kernel: Key type fscrypt-provisioning registered
Mar 14 00:13:14.154208 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 14 00:13:14.154218 kernel: ima: Allocated hash algorithm: sha1
Mar 14 00:13:14.154228 kernel: ima: No architecture policies found
Mar 14 00:13:14.154241 kernel: clk: Disabling unused clocks
Mar 14 00:13:14.154252 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 14 00:13:14.154263 kernel: Write protecting the kernel read-only data: 36864k
Mar 14 00:13:14.154281 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 14 00:13:14.154293 kernel: Run /init as init process
Mar 14 00:13:14.154302 kernel: with arguments:
Mar 14 00:13:14.154314 kernel: /init
Mar 14 00:13:14.154324 kernel: with environment:
Mar 14 00:13:14.154334 kernel: HOME=/
Mar 14 00:13:14.154344 kernel: TERM=linux
Mar 14 00:13:14.154358 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:13:14.154375 systemd[1]: Detected virtualization kvm.
Mar 14 00:13:14.154387 systemd[1]: Detected architecture x86-64.
Mar 14 00:13:14.154397 systemd[1]: Running in initrd.
Mar 14 00:13:14.154408 systemd[1]: No hostname configured, using default hostname.
Mar 14 00:13:14.154419 systemd[1]: Hostname set to .
Mar 14 00:13:14.154430 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:13:14.154441 systemd[1]: Queued start job for default target initrd.target.
Mar 14 00:13:14.154452 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:13:14.154467 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:13:14.154479 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 14 00:13:14.154591 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:13:14.154604 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 14 00:13:14.154615 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 14 00:13:14.154628 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 14 00:13:14.154644 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 14 00:13:14.154726 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:13:14.154739 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:13:14.154750 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:13:14.154762 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:13:14.154796 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:13:14.154812 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:13:14.154827 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:13:14.154839 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:13:14.154850 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 14 00:13:14.154862 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 14 00:13:14.154873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:13:14.154885 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:13:14.154896 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:13:14.154908 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:13:14.154922 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 14 00:13:14.154934 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:13:14.154945 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 14 00:13:14.154960 systemd[1]: Starting systemd-fsck-usr.service...
Mar 14 00:13:14.154972 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:13:14.154985 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:13:14.154999 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:14.155010 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 14 00:13:14.155022 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:13:14.155081 systemd-journald[195]: Collecting audit messages is disabled.
Mar 14 00:13:14.155114 systemd[1]: Finished systemd-fsck-usr.service.
Mar 14 00:13:14.155136 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 14 00:13:14.155148 systemd-journald[195]: Journal started
Mar 14 00:13:14.155177 systemd-journald[195]: Runtime Journal (/run/log/journal/f92a09ad90154cf295482d0f5b3e7d2d) is 6.0M, max 48.4M, 42.3M free.
Mar 14 00:13:14.102159 systemd-modules-load[196]: Inserted module 'overlay'
Mar 14 00:13:14.909250 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 14 00:13:14.909301 kernel: Bridge firewalling registered
Mar 14 00:13:14.295437 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 14 00:13:14.998365 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:13:14.984038 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:13:15.000176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:15.015126 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:13:15.096229 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:15.106994 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:13:15.143078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:13:15.161909 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:13:15.185470 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:15.220986 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:13:15.250116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:13:15.299372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:13:15.316241 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:13:15.375465 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:13:15.404905 dracut-cmdline[226]: dracut-dracut-053
Mar 14 00:13:15.413149 dracut-cmdline[226]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:13:15.542846 systemd-resolved[231]: Positive Trust Anchors:
Mar 14 00:13:15.542921 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:13:15.542960 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:13:15.635429 systemd-resolved[231]: Defaulting to hostname 'linux'.
Mar 14 00:13:15.649766 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:13:15.663445 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:15.766748 kernel: SCSI subsystem initialized
Mar 14 00:13:15.787755 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:13:15.877607 kernel: iscsi: registered transport (tcp)
Mar 14 00:13:15.960973 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:13:15.961252 kernel: QLogic iSCSI HBA Driver
Mar 14 00:13:16.890836 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:13:16.968038 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:13:17.297805 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:13:17.297997 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:13:17.306739 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:13:17.551005 kernel: raid6: avx2x4 gen() 14875 MB/s
Mar 14 00:13:17.573050 kernel: raid6: avx2x2 gen() 16072 MB/s
Mar 14 00:13:17.599412 kernel: raid6: avx2x1 gen() 7521 MB/s
Mar 14 00:13:17.600084 kernel: raid6: using algorithm avx2x2 gen() 16072 MB/s
Mar 14 00:13:17.626427 kernel: raid6: .... xor() 9436 MB/s, rmw enabled
Mar 14 00:13:17.628120 kernel: raid6: using avx2x2 recovery algorithm
Mar 14 00:13:17.727782 kernel: xor: automatically using best checksumming function   avx
Mar 14 00:13:18.605450 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:13:18.658483 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:13:18.691887 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:13:18.724888 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 14 00:13:18.737464 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:13:18.766998 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:13:18.815959 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Mar 14 00:13:18.898482 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:13:18.931019 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:13:19.088865 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:13:19.124798 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:13:19.167175 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:13:19.189404 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:13:19.198580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:19.206887 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:13:19.248833 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 14 00:13:19.252927 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:13:19.276317 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 14 00:13:19.276894 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:13:19.278738 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:13:19.278966 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:19.292468 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:19.318266 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:13:19.319279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:19.344261 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:19.363628 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:13:19.363751 kernel: GPT:9289727 != 19775487
Mar 14 00:13:19.363772 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:13:19.369863 kernel: GPT:9289727 != 19775487
Mar 14 00:13:19.369913 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:13:19.373766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:13:19.406220 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:13:19.472189 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:13:19.504436 kernel: libata version 3.00 loaded.
Mar 14 00:13:19.571601 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:13:19.572620 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:13:19.575719 kernel: ahci 0000:00:1f.2: version 3.0
Mar 14 00:13:19.576222 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 14 00:13:19.580642 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 14 00:13:19.580976 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 14 00:13:19.592667 kernel: scsi host0: ahci
Mar 14 00:13:19.598014 kernel: scsi host1: ahci
Mar 14 00:13:19.605015 kernel: scsi host2: ahci
Mar 14 00:13:19.606194 kernel: scsi host3: ahci
Mar 14 00:13:19.610438 kernel: scsi host4: ahci
Mar 14 00:13:19.612754 kernel: scsi host5: ahci
Mar 14 00:13:19.613040 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 14 00:13:19.613062 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 14 00:13:19.613079 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 14 00:13:19.613096 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 14 00:13:19.613111 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 14 00:13:19.613138 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 14 00:13:19.842404 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (471)
Mar 14 00:13:19.864009 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Mar 14 00:13:19.900090 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 14 00:13:20.184146 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 14 00:13:20.184274 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 14 00:13:20.184291 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 14 00:13:20.184320 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 14 00:13:20.184336 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 14 00:13:20.184352 kernel: ata3.00: applying bridge limits
Mar 14 00:13:20.184368 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 14 00:13:20.184383 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 14 00:13:20.184399 kernel: ata3.00: configured for UDMA/100
Mar 14 00:13:20.184414 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 14 00:13:20.223971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:13:20.249428 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 14 00:13:20.256792 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 14 00:13:20.278800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 00:13:20.342285 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 14 00:13:20.342895 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:13:20.312592 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 14 00:13:20.356885 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:13:20.369487 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:13:20.392800 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 14 00:13:20.418425 disk-uuid[567]: Primary Header is updated.
Mar 14 00:13:20.418425 disk-uuid[567]: Secondary Entries is updated.
Mar 14 00:13:20.418425 disk-uuid[567]: Secondary Header is updated.
Mar 14 00:13:20.461624 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:13:20.471805 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:13:20.520100 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:13:21.495960 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:13:21.504403 disk-uuid[568]: The operation has completed successfully.
Mar 14 00:13:21.634167 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:13:21.634782 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:13:21.682684 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:13:21.710731 sh[593]: Success
Mar 14 00:13:21.771170 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 14 00:13:21.876843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:13:21.901984 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:13:21.911567 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:13:21.974934 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:13:21.975030 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:13:21.975056 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:13:21.983677 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:13:21.983815 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:13:22.022361 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:13:22.045964 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:13:22.066291 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:13:22.081745 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:13:22.122474 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:13:22.122742 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:13:22.127949 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:13:22.150066 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:13:22.174275 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:13:22.189172 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:13:22.206406 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:13:22.240941 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:13:22.785136 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:13:22.825311 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:13:22.932218 systemd-networkd[774]: lo: Link UP
Mar 14 00:13:22.932264 systemd-networkd[774]: lo: Gained carrier
Mar 14 00:13:22.944593 systemd-networkd[774]: Enumeration completed
Mar 14 00:13:22.948400 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:13:22.952025 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:22.952070 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:13:22.961604 systemd-networkd[774]: eth0: Link UP
Mar 14 00:13:22.961617 systemd-networkd[774]: eth0: Gained carrier
Mar 14 00:13:22.961632 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:13:23.009215 systemd[1]: Reached target network.target - Network.
Mar 14 00:13:23.026147 systemd-networkd[774]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 14 00:13:23.078293 ignition[681]: Ignition 2.19.0
Mar 14 00:13:23.078358 ignition[681]: Stage: fetch-offline
Mar 14 00:13:23.078437 ignition[681]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:23.078459 ignition[681]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:13:23.078916 ignition[681]: parsed url from cmdline: ""
Mar 14 00:13:23.078924 ignition[681]: no config URL provided
Mar 14 00:13:23.078933 ignition[681]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:13:23.078949 ignition[681]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:13:23.078996 ignition[681]: op(1): [started] loading QEMU firmware config module
Mar 14 00:13:23.079005 ignition[681]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 14 00:13:23.127241 ignition[681]: op(1): [finished] loading QEMU firmware config module
Mar 14 00:13:23.370082 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.36
Mar 14 00:13:23.370155 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Mar 14 00:13:23.583273 ignition[681]: parsing config with SHA512: e6793fd776d611bec10bbcc81668c40161da983d7736cfb927f43b8b297cd4ed77a2b558ca351f9f32bd4075554720a3cfa4b1ed3d878168c29cced647a18a76
Mar 14 00:13:23.612213 unknown[681]: fetched base config from "system"
Mar 14 00:13:23.612235 unknown[681]: fetched user config from "qemu"
Mar 14 00:13:23.613321 ignition[681]: fetch-offline: fetch-offline passed
Mar 14 00:13:23.613838 ignition[681]: Ignition finished successfully
Mar 14 00:13:23.635948 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:13:23.662120 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 14 00:13:23.684149 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:13:23.743109 ignition[785]: Ignition 2.19.0
Mar 14 00:13:23.743160 ignition[785]: Stage: kargs
Mar 14 00:13:23.743437 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:23.743456 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:13:23.744953 ignition[785]: kargs: kargs passed
Mar 14 00:13:23.748434 ignition[785]: Ignition finished successfully
Mar 14 00:13:23.777658 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:13:23.818178 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:13:23.916870 ignition[793]: Ignition 2.19.0
Mar 14 00:13:23.916934 ignition[793]: Stage: disks
Mar 14 00:13:23.917203 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:23.917225 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:13:23.941172 ignition[793]: disks: disks passed
Mar 14 00:13:23.941312 ignition[793]: Ignition finished successfully
Mar 14 00:13:23.955002 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:13:23.956049 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:13:23.964116 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:13:23.974186 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:13:23.996200 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:13:23.996389 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:13:24.035856 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:13:24.113805 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:13:24.131583 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:13:24.199968 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:13:24.785424 systemd-networkd[774]: eth0: Gained IPv6LL
Mar 14 00:13:25.018097 kernel: EXT4-fs (vda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:13:25.033015 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:13:25.080927 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:13:25.133174 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:25.171135 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:13:25.174774 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:13:25.174970 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:13:25.215315 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Mar 14 00:13:25.175090 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:13:25.265914 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:13:25.266039 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:13:25.266057 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:13:25.278094 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:13:25.286377 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:25.292847 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:13:25.320865 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:13:25.866016 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:13:25.919397 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:13:25.962423 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:13:25.989334 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:13:26.690916 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:13:26.758303 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:13:26.778175 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:13:26.809984 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:13:26.824267 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:13:27.385922 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:13:27.667877 ignition[925]: INFO : Ignition 2.19.0
Mar 14 00:13:27.667877 ignition[925]: INFO : Stage: mount
Mar 14 00:13:27.683930 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:27.683930 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:13:27.683930 ignition[925]: INFO : mount: mount passed
Mar 14 00:13:27.683930 ignition[925]: INFO : Ignition finished successfully
Mar 14 00:13:27.687358 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:13:27.755036 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:13:27.853007 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:13:27.906263 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Mar 14 00:13:27.920337 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:13:27.920414 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:13:27.925993 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:13:27.978572 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:13:27.989202 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:13:28.104470 ignition[955]: INFO : Ignition 2.19.0
Mar 14 00:13:28.112074 ignition[955]: INFO : Stage: files
Mar 14 00:13:28.112074 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:13:28.112074 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:13:28.157845 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:13:28.157845 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:13:28.157845 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:13:28.202904 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:13:28.215689 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:13:28.215689 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:13:28.208637 unknown[955]: wrote ssh authorized keys file for user: core
Mar 14 00:13:28.273333 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:13:28.273333 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:13:28.305786 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:13:28.725777 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:13:28.725777 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:13:28.725777 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 14 00:13:29.027622 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:13:30.421867 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:13:30.421867 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:30.421867 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:13:30.421867 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:13:30.487950 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Mar 14 00:13:30.817072 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 14 00:13:36.515007 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Mar 14 00:13:36.515007 ignition[955]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 14 00:13:36.565378 ignition[955]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:36.565378 ignition[955]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:13:36.565378 ignition[955]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 14 00:13:36.565378 ignition[955]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 14 00:13:36.565378 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 14 00:13:36.565378 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 14 00:13:36.565378 ignition[955]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 14 00:13:36.565378 ignition[955]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 14 00:13:36.760922 ignition[955]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 14 00:13:36.798126 ignition[955]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 14 00:13:36.798126 ignition[955]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 14 00:13:36.798126 ignition[955]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:36.798126 ignition[955]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:13:36.798126 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:36.798126 ignition[955]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:13:36.798126 ignition[955]: INFO : files: files passed
Mar 14 00:13:36.798126 ignition[955]: INFO : Ignition finished successfully
Mar 14 00:13:36.822356 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:13:37.013750 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:13:37.040135 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:13:37.057634 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:13:37.057921 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:13:37.079667 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 14 00:13:37.089923 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:37.118223 initrd-setup-root-after-ignition[985]: grep:
Mar 14 00:13:37.118223 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:37.095035 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:13:37.164418 initrd-setup-root-after-ignition[985]: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:13:37.122878 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:13:37.198756 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:13:37.596279 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:13:37.597574 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:13:37.629233 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:13:37.655872 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:13:37.682638 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:13:37.725719 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:13:37.903762 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:13:37.988757 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 14 00:13:38.052110 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:13:38.078987 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:13:38.095966 systemd[1]: Stopped target timers.target - Timer Units. Mar 14 00:13:38.118741 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 14 00:13:38.145428 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:13:38.164402 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 14 00:13:38.183991 systemd[1]: Stopped target basic.target - Basic System. Mar 14 00:13:38.210860 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 14 00:13:38.223745 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:13:38.263049 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 14 00:13:38.294706 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 14 00:13:38.299395 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:13:38.340185 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 14 00:13:38.362748 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 14 00:13:38.381366 systemd[1]: Stopped target swap.target - Swaps. Mar 14 00:13:38.404564 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 14 00:13:38.408611 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:13:38.444689 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:13:38.450179 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:13:38.457313 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 14 00:13:38.458136 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:13:38.697141 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 14 00:13:38.697463 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Mar 14 00:13:38.751683 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 14 00:13:38.754216 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 14 00:13:38.774653 systemd[1]: Stopped target paths.target - Path Units. Mar 14 00:13:38.776565 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 14 00:13:38.779188 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:13:38.796286 systemd[1]: Stopped target slices.target - Slice Units. Mar 14 00:13:38.809926 systemd[1]: Stopped target sockets.target - Socket Units. Mar 14 00:13:38.824587 systemd[1]: iscsid.socket: Deactivated successfully. Mar 14 00:13:38.850108 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:13:38.864641 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 14 00:13:38.864902 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:13:38.894308 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 14 00:13:38.896369 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 14 00:13:38.897213 systemd[1]: ignition-files.service: Deactivated successfully. Mar 14 00:13:38.897387 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 14 00:13:38.964710 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Mar 14 00:13:39.120885 ignition[1009]: INFO : Ignition 2.19.0 Mar 14 00:13:39.120885 ignition[1009]: INFO : Stage: umount Mar 14 00:13:39.120885 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 14 00:13:39.120885 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 14 00:13:39.120885 ignition[1009]: INFO : umount: umount passed Mar 14 00:13:39.120885 ignition[1009]: INFO : Ignition finished successfully Mar 14 00:13:38.984588 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 14 00:13:38.997150 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 14 00:13:39.000262 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:13:39.012101 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 14 00:13:39.012371 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 14 00:13:39.056688 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 14 00:13:39.056992 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 14 00:13:39.103744 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 14 00:13:39.112615 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 14 00:13:39.112867 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 14 00:13:39.147371 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 14 00:13:39.147870 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 14 00:13:39.170212 systemd[1]: Stopped target network.target - Network. Mar 14 00:13:39.177285 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 14 00:13:39.178121 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 14 00:13:39.180720 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 14 00:13:39.181054 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Mar 14 00:13:39.193445 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 14 00:13:39.195721 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 14 00:13:39.204117 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 14 00:13:39.204220 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 14 00:13:39.223231 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 14 00:13:39.223347 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 14 00:13:39.254332 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 14 00:13:39.256168 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 14 00:13:39.283020 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 14 00:13:39.283379 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 14 00:13:39.284981 systemd-networkd[774]: eth0: DHCPv6 lease lost Mar 14 00:13:39.317323 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 14 00:13:39.318010 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 14 00:13:39.582865 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 14 00:13:39.583090 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:13:39.687231 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 14 00:13:39.691206 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 14 00:13:39.691442 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 14 00:13:39.754461 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 14 00:13:39.755355 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:13:39.763943 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Mar 14 00:13:39.764043 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 14 00:13:39.786166 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 14 00:13:39.786263 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:13:39.806773 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:13:39.858365 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 14 00:13:39.858905 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:13:39.966691 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 14 00:13:39.966946 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 14 00:13:39.984480 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 14 00:13:39.984675 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:13:40.007266 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 14 00:13:40.007444 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 14 00:13:40.021867 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 14 00:13:40.022011 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 14 00:13:40.050644 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 14 00:13:40.051316 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 14 00:13:40.093086 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 14 00:13:40.113262 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 14 00:13:40.113655 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:13:40.114218 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Mar 14 00:13:40.114315 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:13:40.264029 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 14 00:13:40.264457 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:13:40.285029 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 14 00:13:40.285303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:13:40.393972 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 14 00:13:40.394160 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 14 00:13:40.453190 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 14 00:13:40.453453 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 14 00:13:40.462434 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 14 00:13:40.600748 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 14 00:13:40.697448 systemd[1]: Switching root. Mar 14 00:13:40.804986 systemd-journald[195]: Received SIGTERM from PID 1 (systemd). 
Mar 14 00:13:40.805353 systemd-journald[195]: Journal stopped Mar 14 00:13:52.804432 kernel: SELinux: policy capability network_peer_controls=1 Mar 14 00:13:52.805009 kernel: SELinux: policy capability open_perms=1 Mar 14 00:13:52.805039 kernel: SELinux: policy capability extended_socket_class=1 Mar 14 00:13:52.805055 kernel: SELinux: policy capability always_check_network=0 Mar 14 00:13:52.805072 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 14 00:13:52.805096 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 14 00:13:52.805153 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 14 00:13:52.805169 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 14 00:13:52.805224 kernel: audit: type=1403 audit(1773447221.974:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 14 00:13:52.805243 systemd[1]: Successfully loaded SELinux policy in 484.810ms. Mar 14 00:13:52.805313 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 99.640ms. Mar 14 00:13:52.805392 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:13:52.805407 systemd[1]: Detected virtualization kvm. Mar 14 00:13:52.805419 systemd[1]: Detected architecture x86-64. Mar 14 00:13:52.805430 systemd[1]: Detected first boot. Mar 14 00:13:52.805441 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:13:52.805586 zram_generator::config[1055]: No configuration found. Mar 14 00:13:52.805666 systemd[1]: Populated /etc with preset unit settings. Mar 14 00:13:52.805698 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 14 00:13:52.805712 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Mar 14 00:13:52.805723 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 14 00:13:52.805735 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 14 00:13:52.805746 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 14 00:13:52.805758 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 14 00:13:52.805769 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 14 00:13:52.805878 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 14 00:13:52.805895 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 14 00:13:52.805907 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 14 00:13:52.805918 systemd[1]: Created slice user.slice - User and Session Slice. Mar 14 00:13:52.805929 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:13:52.805941 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:13:52.805952 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 14 00:13:52.806003 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 14 00:13:52.806015 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 14 00:13:52.806062 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:13:52.806074 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 14 00:13:52.806116 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:13:52.806128 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. 
Mar 14 00:13:52.806140 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 14 00:13:52.806151 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 14 00:13:52.806163 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 14 00:13:52.806211 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:13:52.806278 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 14 00:13:52.806302 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:13:52.806319 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:13:52.806336 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 14 00:13:52.806352 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 14 00:13:52.806368 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:13:52.806385 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:13:52.806400 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:13:52.806419 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 14 00:13:52.806487 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 14 00:13:52.806584 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 14 00:13:52.806639 systemd[1]: Mounting media.mount - External Media Directory... Mar 14 00:13:52.806656 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:13:52.806673 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 14 00:13:52.806697 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 14 00:13:52.806713 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Mar 14 00:13:52.806766 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 14 00:13:52.806819 systemd[1]: Reached target machines.target - Containers. Mar 14 00:13:52.806902 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 14 00:13:52.806920 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:13:52.806937 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:13:52.806953 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 14 00:13:52.806969 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:13:52.806985 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:13:52.807001 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:13:52.807017 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 14 00:13:52.807077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:13:52.807096 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 14 00:13:52.807112 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 14 00:13:52.807127 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 14 00:13:52.807146 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 14 00:13:52.809210 systemd[1]: Stopped systemd-fsck-usr.service. Mar 14 00:13:52.811039 kernel: ACPI: bus type drm_connector registered Mar 14 00:13:52.811065 kernel: loop: module loaded Mar 14 00:13:52.811234 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 14 00:13:52.811255 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:13:52.811271 kernel: fuse: init (API version 7.39) Mar 14 00:13:52.811287 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 14 00:13:52.811303 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 14 00:13:52.811395 systemd-journald[1141]: Collecting audit messages is disabled. Mar 14 00:13:52.811487 systemd-journald[1141]: Journal started Mar 14 00:13:52.811675 systemd-journald[1141]: Runtime Journal (/run/log/journal/f92a09ad90154cf295482d0f5b3e7d2d) is 6.0M, max 48.4M, 42.3M free. Mar 14 00:13:49.607378 systemd[1]: Queued start job for default target multi-user.target. Mar 14 00:13:49.785447 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 14 00:13:49.787134 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 14 00:13:49.788203 systemd[1]: systemd-journald.service: Consumed 3.109s CPU time. Mar 14 00:13:52.857946 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 14 00:13:52.872116 systemd[1]: verity-setup.service: Deactivated successfully. Mar 14 00:13:52.890902 systemd[1]: Stopped verity-setup.service. Mar 14 00:13:52.908624 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:13:52.916645 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:13:52.925309 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 14 00:13:52.965680 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 14 00:13:52.976829 systemd[1]: Mounted media.mount - External Media Directory. Mar 14 00:13:52.990031 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Mar 14 00:13:52.998964 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 14 00:13:53.017908 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 14 00:13:53.056896 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 14 00:13:53.072217 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:13:53.086388 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 14 00:13:53.088081 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 14 00:13:53.103654 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:13:53.104214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:13:53.113137 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:13:53.113785 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:13:53.125316 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:13:53.134808 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:13:53.172165 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 14 00:13:53.175445 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 14 00:13:53.192466 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:13:53.194653 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:13:53.216112 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 14 00:13:53.274897 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 14 00:13:53.296303 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 14 00:13:53.364152 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Mar 14 00:13:53.416901 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 14 00:13:53.470029 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 14 00:13:53.475953 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 14 00:13:53.476017 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 14 00:13:53.491722 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 14 00:13:53.526188 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 14 00:13:53.573373 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 14 00:13:53.587669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:13:53.603981 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 14 00:13:53.614212 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 14 00:13:53.626593 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:13:55.884455 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 14 00:13:55.919680 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:13:55.931948 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 14 00:13:55.942416 systemd-journald[1141]: Time spent on flushing to /var/log/journal/f92a09ad90154cf295482d0f5b3e7d2d is 149.890ms for 949 entries. Mar 14 00:13:55.942416 systemd-journald[1141]: System Journal (/var/log/journal/f92a09ad90154cf295482d0f5b3e7d2d) is 8.0M, max 195.6M, 187.6M free. 
Mar 14 00:13:56.225416 systemd-journald[1141]: Received client request to flush runtime journal. Mar 14 00:13:55.956102 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 14 00:13:55.979044 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:13:55.996943 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 14 00:13:56.009187 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 14 00:13:56.026789 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 14 00:13:56.115222 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 14 00:13:56.135292 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 14 00:13:56.201588 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 14 00:13:56.275966 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 14 00:13:56.316973 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 14 00:13:56.361826 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 14 00:13:56.417195 kernel: loop0: detected capacity change from 0 to 142488 Mar 14 00:13:56.464834 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 14 00:13:56.520173 udevadm[1182]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 14 00:13:56.577420 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 14 00:13:56.583314 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 14 00:13:56.604080 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. 
Mar 14 00:13:56.604128 systemd-tmpfiles[1172]: ACLs are not supported, ignoring. Mar 14 00:13:56.619367 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 14 00:13:56.622644 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 14 00:13:56.674032 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 14 00:13:56.711778 kernel: loop1: detected capacity change from 0 to 217752 Mar 14 00:13:57.055051 kernel: loop2: detected capacity change from 0 to 140768 Mar 14 00:13:57.206669 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 14 00:13:57.341356 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 14 00:13:57.358587 kernel: loop3: detected capacity change from 0 to 142488 Mar 14 00:13:57.440351 kernel: loop4: detected capacity change from 0 to 217752 Mar 14 00:13:57.520306 kernel: loop5: detected capacity change from 0 to 140768 Mar 14 00:13:57.518283 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Mar 14 00:13:57.518311 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Mar 14 00:13:57.725236 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 14 00:13:57.766161 (sd-merge)[1196]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 14 00:13:57.767419 (sd-merge)[1196]: Merged extensions into '/usr'. Mar 14 00:13:57.829322 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)... Mar 14 00:13:57.830044 systemd[1]: Reloading... Mar 14 00:13:58.659932 zram_generator::config[1223]: No configuration found. Mar 14 00:14:00.391114 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Mar 14 00:14:00.431821 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 14 00:14:00.875966 systemd[1]: Reloading finished in 2976 ms. Mar 14 00:14:00.975382 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 14 00:14:00.988127 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 14 00:14:01.052097 systemd[1]: Starting ensure-sysext.service... Mar 14 00:14:01.074731 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 14 00:14:01.095368 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)... Mar 14 00:14:01.095391 systemd[1]: Reloading... Mar 14 00:14:02.162634 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 14 00:14:02.165655 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 14 00:14:02.173981 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 14 00:14:02.174661 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Mar 14 00:14:02.174856 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Mar 14 00:14:02.199416 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. Mar 14 00:14:02.199768 systemd-tmpfiles[1262]: Skipping /boot Mar 14 00:14:02.202641 zram_generator::config[1284]: No configuration found. Mar 14 00:14:02.273733 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot. 
Mar 14 00:14:02.273802 systemd-tmpfiles[1262]: Skipping /boot Mar 14 00:14:02.904805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:14:03.075142 systemd[1]: Reloading finished in 1978 ms. Mar 14 00:14:03.139342 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 14 00:14:03.179824 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:14:03.262084 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:14:03.285353 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:14:03.318154 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 14 00:14:03.365593 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 14 00:14:03.371834 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 14 00:14:03.406300 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 14 00:14:03.438344 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:14:03.439308 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:14:03.455153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:14:03.500616 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:14:03.516846 augenrules[1350]: No rules Mar 14 00:14:03.547292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 14 00:14:03.563432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:14:03.564194 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:14:03.571423 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:14:03.593480 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 14 00:14:03.600638 systemd-udevd[1339]: Using default interface naming scheme 'v255'. Mar 14 00:14:03.612121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:14:03.612751 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:14:03.660853 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:14:03.661432 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 14 00:14:03.685157 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:14:03.685657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:14:03.785239 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:14:03.797288 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:14:03.831196 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 14 00:14:03.868757 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 14 00:14:03.889719 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:14:03.899641 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Mar 14 00:14:03.910404 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 14 00:14:03.974591 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:14:03.975088 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:14:03.992641 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:14:04.023027 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:14:04.095739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:14:04.130464 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:14:04.187777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 14 00:14:04.227139 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 14 00:14:04.257391 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:14:04.260157 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 14 00:14:04.278185 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 14 00:14:04.290997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 14 00:14:04.291606 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 14 00:14:04.301744 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 14 00:14:04.302240 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 14 00:14:04.318305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 14 00:14:04.321197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Mar 14 00:14:04.383419 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 14 00:14:04.396113 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 14 00:14:04.656376 systemd[1]: Finished ensure-sysext.service. Mar 14 00:14:04.989265 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 14 00:14:04.993735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 14 00:14:05.015036 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 14 00:14:05.022169 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 14 00:14:05.310624 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 14 00:14:05.385024 systemd-resolved[1338]: Positive Trust Anchors: Mar 14 00:14:05.385763 systemd-resolved[1338]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 14 00:14:05.385880 systemd-resolved[1338]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 14 00:14:05.396113 systemd-resolved[1338]: Defaulting to hostname 'linux'. Mar 14 00:14:05.401872 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Mar 14 00:14:05.420693 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:14:05.701652 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1388) Mar 14 00:14:05.821675 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 14 00:14:05.855654 kernel: ACPI: button: Power Button [PWRF] Mar 14 00:14:05.861255 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 14 00:14:05.875397 systemd[1]: Reached target time-set.target - System Time Set. Mar 14 00:14:05.877433 systemd-networkd[1393]: lo: Link UP Mar 14 00:14:05.877441 systemd-networkd[1393]: lo: Gained carrier Mar 14 00:14:06.085419 systemd-networkd[1393]: Enumeration completed Mar 14 00:14:06.092109 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 14 00:14:06.093698 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:14:06.093706 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 14 00:14:06.105334 systemd[1]: Reached target network.target - Network. Mar 14 00:14:06.105889 systemd-networkd[1393]: eth0: Link UP Mar 14 00:14:06.105949 systemd-networkd[1393]: eth0: Gained carrier Mar 14 00:14:06.105983 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 14 00:14:06.155762 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 14 00:14:06.180885 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Mar 14 00:14:06.188703 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 14 00:14:06.192715 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 14 00:14:06.193097 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 14 00:14:06.193404 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 14 00:14:06.191797 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Mar 14 00:14:06.200037 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 14 00:14:06.200258 systemd-timesyncd[1404]: Initial clock synchronization to Sat 2026-03-14 00:14:06.107909 UTC. Mar 14 00:14:06.258359 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 14 00:14:06.348809 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 14 00:14:06.387148 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:14:06.512074 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 14 00:14:06.732616 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:14:07.380817 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 14 00:14:07.531408 systemd-networkd[1393]: eth0: Gained IPv6LL Mar 14 00:14:07.543273 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 14 00:14:07.551464 systemd[1]: Reached target network-online.target - Network is Online. 
Mar 14 00:14:07.640969 kernel: kvm_amd: TSC scaling supported Mar 14 00:14:07.642968 kernel: kvm_amd: Nested Virtualization enabled Mar 14 00:14:07.643013 kernel: kvm_amd: Nested Paging enabled Mar 14 00:14:07.646294 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 14 00:14:07.651136 kernel: kvm_amd: PMU virtualization is disabled Mar 14 00:14:08.219382 kernel: EDAC MC: Ver: 3.0.0 Mar 14 00:14:08.275769 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 14 00:14:08.317883 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 14 00:14:08.386166 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:14:08.461303 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 14 00:14:08.471411 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:14:08.480137 systemd[1]: Reached target sysinit.target - System Initialization. Mar 14 00:14:08.488258 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 14 00:14:08.497186 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 14 00:14:08.506268 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 14 00:14:08.520194 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 14 00:14:08.531241 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 14 00:14:08.544603 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 14 00:14:08.544678 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:14:08.550872 systemd[1]: Reached target timers.target - Timer Units. 
Mar 14 00:14:08.560219 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 14 00:14:08.581891 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 14 00:14:08.607229 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 14 00:14:08.635168 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 14 00:14:08.657646 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 14 00:14:08.674476 systemd[1]: Reached target sockets.target - Socket Units. Mar 14 00:14:08.683693 systemd[1]: Reached target basic.target - Basic System. Mar 14 00:14:08.692271 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:14:08.692311 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 14 00:14:08.804323 systemd[1]: Starting containerd.service - containerd container runtime... Mar 14 00:14:08.809214 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 14 00:14:08.879953 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 14 00:14:08.936065 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 14 00:14:08.957817 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 14 00:14:08.981011 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 14 00:14:08.983387 jq[1435]: false Mar 14 00:14:08.987258 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 14 00:14:09.007292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:09.026220 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Mar 14 00:14:09.037052 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 14 00:14:09.050640 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 14 00:14:09.075671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 14 00:14:09.089945 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 14 00:14:09.114908 extend-filesystems[1436]: Found loop3 Mar 14 00:14:09.134636 dbus-daemon[1434]: [system] SELinux support is enabled Mar 14 00:14:09.130908 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 14 00:14:09.144851 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 14 00:14:09.145827 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 14 00:14:09.150899 extend-filesystems[1436]: Found loop4 Mar 14 00:14:09.150899 extend-filesystems[1436]: Found loop5 Mar 14 00:14:09.150899 extend-filesystems[1436]: Found sr0 Mar 14 00:14:09.150899 extend-filesystems[1436]: Found vda Mar 14 00:14:09.150899 extend-filesystems[1436]: Found vda1 Mar 14 00:14:09.150899 extend-filesystems[1436]: Found vda2 Mar 14 00:14:09.150899 extend-filesystems[1436]: Found vda3 Mar 14 00:14:09.150899 extend-filesystems[1436]: Found usr Mar 14 00:14:09.150899 extend-filesystems[1436]: Found vda4 Mar 14 00:14:09.150899 extend-filesystems[1436]: Found vda6 Mar 14 00:14:09.150899 extend-filesystems[1436]: Found vda7 Mar 14 00:14:09.150899 extend-filesystems[1436]: Found vda9 Mar 14 00:14:09.150899 extend-filesystems[1436]: Checking size of /dev/vda9 Mar 14 00:14:09.303557 extend-filesystems[1436]: Resized partition /dev/vda9 Mar 14 00:14:09.172309 systemd[1]: Starting update-engine.service - Update Engine... 
Mar 14 00:14:09.311903 extend-filesystems[1464]: resize2fs 1.47.1 (20-May-2024) Mar 14 00:14:09.332754 jq[1460]: true Mar 14 00:14:09.192101 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 14 00:14:09.198578 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 14 00:14:09.262926 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 14 00:14:09.285257 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 14 00:14:09.293200 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 14 00:14:09.318359 systemd[1]: motdgen.service: Deactivated successfully. Mar 14 00:14:09.318847 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 14 00:14:09.350761 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 14 00:14:09.364634 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1377) Mar 14 00:14:09.398209 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 14 00:14:09.395131 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 14 00:14:09.398230 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 14 00:14:09.400424 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button) Mar 14 00:14:09.400615 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 14 00:14:09.407785 systemd-logind[1451]: New seat seat0. Mar 14 00:14:09.418434 systemd[1]: Started systemd-logind.service - User Login Management. 
Mar 14 00:14:09.485987 update_engine[1454]: I20260314 00:14:09.485872 1454 main.cc:92] Flatcar Update Engine starting Mar 14 00:14:09.496100 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 14 00:14:09.510837 jq[1470]: true Mar 14 00:14:09.511137 update_engine[1454]: I20260314 00:14:09.500006 1454 update_check_scheduler.cc:74] Next update check in 6m6s Mar 14 00:14:09.561811 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 14 00:14:09.561187 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 14 00:14:09.561658 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 14 00:14:09.761827 systemd[1]: Started update-engine.service - Update Engine. Mar 14 00:14:09.778225 tar[1469]: linux-amd64/LICENSE Mar 14 00:14:09.778225 tar[1469]: linux-amd64/helm Mar 14 00:14:09.750244 dbus-daemon[1434]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 14 00:14:09.776365 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 14 00:14:09.776973 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 14 00:14:09.777173 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 14 00:14:09.789302 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 14 00:14:09.789626 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 14 00:14:09.806014 extend-filesystems[1464]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 14 00:14:09.806014 extend-filesystems[1464]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 14 00:14:09.806014 extend-filesystems[1464]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 14 00:14:09.841907 extend-filesystems[1436]: Resized filesystem in /dev/vda9 Mar 14 00:14:09.843799 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 14 00:14:10.031184 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 14 00:14:10.031811 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 14 00:14:10.059083 bash[1505]: Updated "/home/core/.ssh/authorized_keys" Mar 14 00:14:10.061733 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 14 00:14:10.068848 sshd_keygen[1459]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 14 00:14:10.086323 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 14 00:14:10.589438 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 14 00:14:10.649996 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 14 00:14:10.693898 systemd[1]: issuegen.service: Deactivated successfully. Mar 14 00:14:10.694313 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 14 00:14:10.727026 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:14:11.251631 locksmithd[1498]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 14 00:14:11.701144 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:14:11.989957 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:14:12.028093 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:14:12.039466 systemd[1]: Reached target getty.target - Login Prompts. 
Mar 14 00:14:15.300702 containerd[1471]: time="2026-03-14T00:14:15.282857098Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 14 00:14:15.746972 containerd[1471]: time="2026-03-14T00:14:15.745655202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:15.768178 containerd[1471]: time="2026-03-14T00:14:15.767282295Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:14:15.768178 containerd[1471]: time="2026-03-14T00:14:15.767443084Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 14 00:14:15.768178 containerd[1471]: time="2026-03-14T00:14:15.767474495Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 14 00:14:15.769636 containerd[1471]: time="2026-03-14T00:14:15.769217029Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 14 00:14:15.769636 containerd[1471]: time="2026-03-14T00:14:15.769253392Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:15.771408 containerd[1471]: time="2026-03-14T00:14:15.771374292Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:14:15.771597 containerd[1471]: time="2026-03-14T00:14:15.771484520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Mar 14 00:14:15.772289 containerd[1471]: time="2026-03-14T00:14:15.772261924Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:14:15.772435 containerd[1471]: time="2026-03-14T00:14:15.772350276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:15.772619 containerd[1471]: time="2026-03-14T00:14:15.772594923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:14:15.772685 containerd[1471]: time="2026-03-14T00:14:15.772668039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:15.773087 containerd[1471]: time="2026-03-14T00:14:15.772975249Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:15.774006 containerd[1471]: time="2026-03-14T00:14:15.773975084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 14 00:14:15.775773 containerd[1471]: time="2026-03-14T00:14:15.774838983Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 14 00:14:15.775773 containerd[1471]: time="2026-03-14T00:14:15.774867359Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Mar 14 00:14:15.775773 containerd[1471]: time="2026-03-14T00:14:15.775327349Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 14 00:14:15.775773 containerd[1471]: time="2026-03-14T00:14:15.775470345Z" level=info msg="metadata content store policy set" policy=shared Mar 14 00:14:15.889425 containerd[1471]: time="2026-03-14T00:14:15.887299417Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 14 00:14:15.903356 containerd[1471]: time="2026-03-14T00:14:15.900944891Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 14 00:14:15.903356 containerd[1471]: time="2026-03-14T00:14:15.901134935Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 14 00:14:15.903356 containerd[1471]: time="2026-03-14T00:14:15.901174233Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 14 00:14:15.903356 containerd[1471]: time="2026-03-14T00:14:15.901416753Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 14 00:14:15.903356 containerd[1471]: time="2026-03-14T00:14:15.902465653Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 14 00:14:16.136000 containerd[1471]: time="2026-03-14T00:14:16.135344112Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.138637049Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.138677226Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.138698536Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.138719319Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.139165067Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.139282809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.139689421Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.139973947Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.140003087Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.140021203Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 14 00:14:16.142049 containerd[1471]: time="2026-03-14T00:14:16.140036452Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 14 00:14:16.142611 containerd[1471]: time="2026-03-14T00:14:16.142250335Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Mar 14 00:14:16.142611 containerd[1471]: time="2026-03-14T00:14:16.142376166Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.142611 containerd[1471]: time="2026-03-14T00:14:16.142403858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.142611 containerd[1471]: time="2026-03-14T00:14:16.142426079Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.142611 containerd[1471]: time="2026-03-14T00:14:16.142445561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.142611 containerd[1471]: time="2026-03-14T00:14:16.142480535Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.142890352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.142971533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.143001902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.143023502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.143040539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.143056817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.143125524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.143200444Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.143388140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.143414395Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.143433479Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.144068952Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.144103645Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 14 00:14:16.145951 containerd[1471]: time="2026-03-14T00:14:16.144120393Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 14 00:14:16.146328 containerd[1471]: time="2026-03-14T00:14:16.144186783Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 14 00:14:16.146328 containerd[1471]: time="2026-03-14T00:14:16.144204419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Mar 14 00:14:16.146328 containerd[1471]: time="2026-03-14T00:14:16.144273955Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 14 00:14:16.146328 containerd[1471]: time="2026-03-14T00:14:16.144376148Z" level=info msg="NRI interface is disabled by configuration." Mar 14 00:14:16.146328 containerd[1471]: time="2026-03-14T00:14:16.144445564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 14 00:14:16.148861 containerd[1471]: time="2026-03-14T00:14:16.148343085Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 14 00:14:16.148861 containerd[1471]: time="2026-03-14T00:14:16.148639346Z" level=info msg="Connect containerd service" Mar 14 00:14:16.153170 containerd[1471]: time="2026-03-14T00:14:16.148894262Z" level=info msg="using legacy CRI server" Mar 14 00:14:16.153170 containerd[1471]: time="2026-03-14T00:14:16.148912697Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 14 00:14:16.155009 containerd[1471]: time="2026-03-14T00:14:16.152288483Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 14 00:14:16.157959 containerd[1471]: time="2026-03-14T00:14:16.156946349Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Mar 14 00:14:16.157959 containerd[1471]: time="2026-03-14T00:14:16.157419012Z" level=info msg="Start subscribing containerd event" Mar 14 00:14:16.159966 containerd[1471]: time="2026-03-14T00:14:16.159353752Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 14 00:14:16.159966 containerd[1471]: time="2026-03-14T00:14:16.159447076Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 14 00:14:16.164616 containerd[1471]: time="2026-03-14T00:14:16.162840158Z" level=info msg="Start recovering state" Mar 14 00:14:16.164616 containerd[1471]: time="2026-03-14T00:14:16.163460911Z" level=info msg="Start event monitor" Mar 14 00:14:16.166848 containerd[1471]: time="2026-03-14T00:14:16.166604660Z" level=info msg="Start snapshots syncer" Mar 14 00:14:16.167163 containerd[1471]: time="2026-03-14T00:14:16.166917718Z" level=info msg="Start cni network conf syncer for default" Mar 14 00:14:16.167163 containerd[1471]: time="2026-03-14T00:14:16.166988662Z" level=info msg="Start streaming server" Mar 14 00:14:16.172952 containerd[1471]: time="2026-03-14T00:14:16.170983273Z" level=info msg="containerd successfully booted in 1.113362s" Mar 14 00:14:16.200464 systemd[1]: Started containerd.service - containerd container runtime. Mar 14 00:14:16.210169 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 14 00:14:16.235891 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:48634.service - OpenSSH per-connection server daemon (10.0.0.1:48634). Mar 14 00:14:16.569804 tar[1469]: linux-amd64/README.md Mar 14 00:14:17.115332 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
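The "no network config found in /etc/cni/net.d" error above is expected on a first boot before any CNI plugin has written its configuration; containerd's CRI plugin keeps retrying via the "cni network conf syncer" it starts a few lines later. As a hedged sketch (the network name, bridge name, and subnet here are illustrative and not taken from this host), a minimal bridge conflist that would satisfy the loader looks like:

```json
{
  "cniVersion": "0.4.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
```

Dropping a file like this into /etc/cni/net.d (e.g. as 10-containerd-net.conflist) is typically done by a CNI plugin's installer, not by hand; on a kubeadm-style node the error clears once the chosen network add-on is deployed.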
Mar 14 00:14:17.223701 sshd[1540]: Accepted publickey for core from 10.0.0.1 port 48634 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:14:17.436001 sshd[1540]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:17.513097 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:14:17.549037 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:14:17.698441 systemd-logind[1451]: New session 1 of user core. Mar 14 00:14:18.356140 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:14:18.414921 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:14:18.517469 (systemd)[1551]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:14:19.357035 systemd[1551]: Queued start job for default target default.target. Mar 14 00:14:19.384123 systemd[1551]: Created slice app.slice - User Application Slice. Mar 14 00:14:19.384313 systemd[1551]: Reached target paths.target - Paths. Mar 14 00:14:19.384335 systemd[1551]: Reached target timers.target - Timers. Mar 14 00:14:19.411702 systemd[1551]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:14:19.911204 systemd[1551]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 14 00:14:19.911707 systemd[1551]: Reached target sockets.target - Sockets. Mar 14 00:14:19.911745 systemd[1551]: Reached target basic.target - Basic System. Mar 14 00:14:19.915921 systemd[1551]: Reached target default.target - Main User Target. Mar 14 00:14:19.915988 systemd[1551]: Startup finished in 1.367s. Mar 14 00:14:19.917254 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:14:20.351792 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 14 00:14:20.817953 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:40060.service - OpenSSH per-connection server daemon (10.0.0.1:40060). Mar 14 00:14:21.701623 sshd[1562]: Accepted publickey for core from 10.0.0.1 port 40060 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:14:21.715197 sshd[1562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:21.764331 systemd-logind[1451]: New session 2 of user core. Mar 14 00:14:21.778138 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:14:22.232808 sshd[1562]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:22.253282 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:40060.service: Deactivated successfully. Mar 14 00:14:22.260293 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:14:22.294667 systemd-logind[1451]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:14:22.315872 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:40062.service - OpenSSH per-connection server daemon (10.0.0.1:40062). Mar 14 00:14:22.322954 systemd-logind[1451]: Removed session 2. Mar 14 00:14:22.830852 sshd[1569]: Accepted publickey for core from 10.0.0.1 port 40062 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:14:22.844983 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:22.901477 systemd-logind[1451]: New session 3 of user core. Mar 14 00:14:22.917854 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:14:23.058059 sshd[1569]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:23.078934 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:40062.service: Deactivated successfully. Mar 14 00:14:23.090779 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:14:23.100427 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:14:23.108688 systemd-logind[1451]: Removed session 3. 
Mar 14 00:14:24.075225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:14:24.079165 (kubelet)[1580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:24.205856 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:14:24.220412 systemd[1]: Startup finished in 7.998s (kernel) + 29.143s (initrd) + 42.698s (userspace) = 1min 19.840s. Mar 14 00:14:31.012091 kubelet[1580]: E0314 00:14:31.010946 1580 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:31.033186 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:31.033701 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:31.034692 systemd[1]: kubelet.service: Consumed 12.103s CPU time. Mar 14 00:14:33.138067 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:43872.service - OpenSSH per-connection server daemon (10.0.0.1:43872). Mar 14 00:14:33.427077 sshd[1589]: Accepted publickey for core from 10.0.0.1 port 43872 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:14:33.507215 sshd[1589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:33.570128 systemd-logind[1451]: New session 4 of user core. Mar 14 00:14:33.617216 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:14:33.747186 sshd[1589]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:33.782740 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:43872.service: Deactivated successfully. Mar 14 00:14:33.786933 systemd[1]: session-4.scope: Deactivated successfully. 
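The kubelet failure above ("open /var/lib/kubelet/config.yaml: no such file or directory") recurs throughout this log because the unit is started before the node has been joined to a cluster; kubeadm normally writes that file during `kubeadm init` or `kubeadm join`. A hedged shell check (the path comes from the error in the log; the variable name and messages are mine):

```shell
# Report whether the kubelet config that kubeadm writes is present yet.
# Path taken from the error in the log above; everything else is illustrative.
config="${KUBELET_CONFIG:-/var/lib/kubelet/config.yaml}"
if [ -f "$config" ]; then
  echo "present"
else
  echo "missing"
fi
```

Until the file exists, the unit will keep failing with status=1/FAILURE and systemd will keep rescheduling it, exactly as the later restart-counter messages in this log show.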
Mar 14 00:14:33.811443 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:14:33.849641 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:43888.service - OpenSSH per-connection server daemon (10.0.0.1:43888). Mar 14 00:14:33.875845 systemd-logind[1451]: Removed session 4. Mar 14 00:14:34.073156 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 43888 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:14:34.082067 sshd[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:34.395787 systemd-logind[1451]: New session 5 of user core. Mar 14 00:14:34.480296 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:14:34.733391 sshd[1596]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:34.841799 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:43888.service: Deactivated successfully. Mar 14 00:14:34.845241 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:14:34.867161 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:14:34.897624 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:43902.service - OpenSSH per-connection server daemon (10.0.0.1:43902). Mar 14 00:14:34.939854 systemd-logind[1451]: Removed session 5. Mar 14 00:14:35.505258 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 43902 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:14:35.530377 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:35.635002 systemd-logind[1451]: New session 6 of user core. Mar 14 00:14:35.808801 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 14 00:14:36.314849 sshd[1603]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:36.384099 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:43902.service: Deactivated successfully. Mar 14 00:14:36.398741 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 14 00:14:36.402606 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:14:36.425347 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:43908.service - OpenSSH per-connection server daemon (10.0.0.1:43908). Mar 14 00:14:36.431855 systemd-logind[1451]: Removed session 6. Mar 14 00:14:36.536822 sshd[1610]: Accepted publickey for core from 10.0.0.1 port 43908 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:14:36.539115 sshd[1610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:36.587650 systemd-logind[1451]: New session 7 of user core. Mar 14 00:14:36.598860 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:14:36.774071 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:14:36.775750 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:36.826453 sudo[1613]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:36.872747 sshd[1610]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:36.905144 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:43908.service: Deactivated successfully. Mar 14 00:14:36.914430 systemd[1]: session-7.scope: Deactivated successfully. Mar 14 00:14:36.926100 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:14:36.960063 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:43938.service - OpenSSH per-connection server daemon (10.0.0.1:43938). Mar 14 00:14:36.964435 systemd-logind[1451]: Removed session 7. Mar 14 00:14:37.071335 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 43938 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:14:37.079467 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:37.117486 systemd-logind[1451]: New session 8 of user core. 
Mar 14 00:14:37.132408 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 14 00:14:37.266170 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:14:37.266834 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:37.288622 sudo[1622]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:37.328209 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:14:37.329002 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:37.386083 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:14:37.393976 auditctl[1625]: No rules Mar 14 00:14:37.395169 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:14:37.395712 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:14:37.440061 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:14:37.656657 augenrules[1643]: No rules Mar 14 00:14:37.663233 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 14 00:14:37.707917 sudo[1621]: pam_unix(sudo:session): session closed for user root Mar 14 00:14:37.728979 sshd[1618]: pam_unix(sshd:session): session closed for user core Mar 14 00:14:37.773093 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:43938.service: Deactivated successfully. Mar 14 00:14:37.781076 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:14:37.788396 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:14:37.812133 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:43942.service - OpenSSH per-connection server daemon (10.0.0.1:43942). Mar 14 00:14:37.816317 systemd-logind[1451]: Removed session 8. 
Mar 14 00:14:38.177469 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 43942 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:14:38.184225 sshd[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:14:38.315615 systemd-logind[1451]: New session 9 of user core. Mar 14 00:14:38.331904 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 14 00:14:38.605669 sudo[1654]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:14:38.606469 sudo[1654]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:14:41.190847 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:14:41.232718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:41.767963 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:14:41.916886 (dockerd)[1675]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:14:44.713906 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
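The "Scheduled restart job, restart counter is at 1" message above is systemd's Restart= logic re-queuing the failed kubelet unit; the counter climbs to 6 by the end of this log. A hedged one-liner for pulling that counter out of a captured journal line (the sample line is pasted from this log; the sed pattern is mine):

```shell
# Extract systemd's restart counter from a journal line like the ones above.
line='systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.'
counter=$(printf '%s\n' "$line" | sed -n 's/.*restart counter is at \([0-9][0-9]*\)\..*/\1/p')
echo "$counter"   # → 1
```

The same pattern works against a live `journalctl -u kubelet` stream when tracking how often a unit is being restarted.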
Mar 14 00:14:44.719321 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:44.905267 dockerd[1675]: time="2026-03-14T00:14:44.903438947Z" level=info msg="Starting up" Mar 14 00:14:45.218050 kubelet[1690]: E0314 00:14:45.216827 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:14:45.225798 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:14:45.226165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:14:45.229251 systemd[1]: kubelet.service: Consumed 2.650s CPU time. Mar 14 00:14:45.877044 dockerd[1675]: time="2026-03-14T00:14:45.875205288Z" level=info msg="Loading containers: start." Mar 14 00:14:47.582441 kernel: Initializing XFRM netlink socket Mar 14 00:14:48.789716 systemd-networkd[1393]: docker0: Link UP Mar 14 00:14:48.972719 dockerd[1675]: time="2026-03-14T00:14:48.969282237Z" level=info msg="Loading containers: done." 
Mar 14 00:14:52.828117 dockerd[1675]: time="2026-03-14T00:14:52.819414815Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:14:53.221880 dockerd[1675]: time="2026-03-14T00:14:53.014842657Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:14:53.221880 dockerd[1675]: time="2026-03-14T00:14:53.178769489Z" level=info msg="Daemon has completed initialization" Mar 14 00:14:53.872619 dockerd[1675]: time="2026-03-14T00:14:53.872260977Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:14:53.876603 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:14:54.401084 update_engine[1454]: I20260314 00:14:54.391088 1454 update_attempter.cc:509] Updating boot flags... Mar 14 00:14:54.670129 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1836) Mar 14 00:14:55.130237 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1834) Mar 14 00:14:55.284276 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:14:55.315301 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:14:57.871229 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:14:57.876268 (kubelet)[1856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:14:59.888356 containerd[1471]: time="2026-03-14T00:14:59.880659910Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\"" Mar 14 00:15:00.322030 kubelet[1856]: E0314 00:15:00.269110 1856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:15:00.362319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:15:00.362857 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:15:00.363617 systemd[1]: kubelet.service: Consumed 3.407s CPU time. Mar 14 00:15:03.135692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1812861025.mount: Deactivated successfully. Mar 14 00:15:10.682188 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 14 00:15:10.708319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:15:11.718640 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:15:11.775781 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:15:12.782065 kubelet[1934]: E0314 00:15:12.781242 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:15:12.789888 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:15:12.790347 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:15:12.791187 systemd[1]: kubelet.service: Consumed 1.907s CPU time. Mar 14 00:15:13.818948 containerd[1471]: time="2026-03-14T00:15:13.816990208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:13.818948 containerd[1471]: time="2026-03-14T00:15:13.817953159Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.2: active requests=0, bytes read=27696467" Mar 14 00:15:13.824215 containerd[1471]: time="2026-03-14T00:15:13.824092821Z" level=info msg="ImageCreate event name:\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:13.834903 containerd[1471]: time="2026-03-14T00:15:13.834051981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:13.845330 containerd[1471]: time="2026-03-14T00:15:13.845125994Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.2\" with image id 
\"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:68cdc586f13b13edb7aa30a18155be530136a39cfd5ef8672aad8ccc98f0a7f7\", size \"27693066\" in 13.959417559s" Mar 14 00:15:13.845628 containerd[1471]: time="2026-03-14T00:15:13.845591266Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.2\" returns image reference \"sha256:66108468ce51257077e642f2f509cd61d470029036a7954a1a47ca15b2706dda\"" Mar 14 00:15:13.881326 containerd[1471]: time="2026-03-14T00:15:13.878719598Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\"" Mar 14 00:15:23.221778 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 14 00:15:23.369634 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:15:23.564349 containerd[1471]: time="2026-03-14T00:15:23.531991839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:23.564349 containerd[1471]: time="2026-03-14T00:15:23.564603634Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.2: active requests=0, bytes read=21450700" Mar 14 00:15:23.611754 containerd[1471]: time="2026-03-14T00:15:23.608928447Z" level=info msg="ImageCreate event name:\"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:24.005367 containerd[1471]: time="2026-03-14T00:15:24.004393414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:24.029925 containerd[1471]: time="2026-03-14T00:15:24.029671567Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-controller-manager:v1.35.2\" with image id \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d9784320a41dd1b155c0ad8fdb5823d60c475870f3dd23865edde36b585748f2\", size \"23142311\" in 10.150744906s" Mar 14 00:15:24.030225 containerd[1471]: time="2026-03-14T00:15:24.030085891Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.2\" returns image reference \"sha256:0f2dd35011c05b55a97c9304ae1d36cfd58499cc1fd3dd8ccdf6efef1144e36a\"" Mar 14 00:15:24.090815 containerd[1471]: time="2026-03-14T00:15:24.084088738Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\"" Mar 14 00:15:26.680146 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:15:26.684735 (kubelet)[1954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:15:27.301365 kubelet[1954]: E0314 00:15:27.301105 1954 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:15:27.305639 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:15:27.305972 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:15:27.326451 systemd[1]: kubelet.service: Consumed 3.552s CPU time. 
Mar 14 00:15:31.228571 containerd[1471]: time="2026-03-14T00:15:31.227863292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:31.246145 containerd[1471]: time="2026-03-14T00:15:31.237269999Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.2: active requests=0, bytes read=15548429" Mar 14 00:15:31.247087 containerd[1471]: time="2026-03-14T00:15:31.246977214Z" level=info msg="ImageCreate event name:\"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:31.256915 containerd[1471]: time="2026-03-14T00:15:31.256821436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:31.260304 containerd[1471]: time="2026-03-14T00:15:31.260182038Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.2\" with image id \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5833e2c4b779215efe7a48126c067de199e86aa5a86518693adeef16db0ff943\", size \"17240058\" in 7.175767409s" Mar 14 00:15:31.260631 containerd[1471]: time="2026-03-14T00:15:31.260549263Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.2\" returns image reference \"sha256:ee83c410d7938aa1752b4e79a8d51f03710b4becc23b2e095fba471049fb2914\"" Mar 14 00:15:31.267454 containerd[1471]: time="2026-03-14T00:15:31.267364162Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\"" Mar 14 00:15:35.781330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522408563.mount: Deactivated successfully. 
Mar 14 00:15:36.753293 containerd[1471]: time="2026-03-14T00:15:36.752800145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:36.755848 containerd[1471]: time="2026-03-14T00:15:36.754099138Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.2: active requests=0, bytes read=25685312" Mar 14 00:15:36.756788 containerd[1471]: time="2026-03-14T00:15:36.756708652Z" level=info msg="ImageCreate event name:\"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:36.761364 containerd[1471]: time="2026-03-14T00:15:36.761243694Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:36.762796 containerd[1471]: time="2026-03-14T00:15:36.762633776Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.2\" with image id \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\", repo tag \"registry.k8s.io/kube-proxy:v1.35.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:015265214cc874b593a7adccdcfe4ac15d2b8e9ae89881bdcd5bcb99d42e1862\", size \"25684331\" in 5.49478155s" Mar 14 00:15:36.762796 containerd[1471]: time="2026-03-14T00:15:36.762717667Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.2\" returns image reference \"sha256:3c471cf273e44f68c91b48985c27627d581915b9ee5e72f7227bbf2146008b5e\"" Mar 14 00:15:36.766106 containerd[1471]: time="2026-03-14T00:15:36.766036775Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Mar 14 00:15:37.365217 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Mar 14 00:15:37.385003 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
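The pull summaries above report both a byte count and a wall-clock duration (e.g. kube-proxy:v1.35.2: 25684331 bytes in ~5.495s), which makes the average transfer rate easy to derive. A hedged sketch using the figures from this log (awk arithmetic only; nothing here contacts the registry):

```shell
# Average pull rate for kube-proxy:v1.35.2, from the size and duration
# reported in the containerd log above.
bytes=25684331
secs=5.49478155
awk -v b="$bytes" -v s="$secs" 'BEGIN { printf "%.1f MiB/s\n", b / s / 1048576 }'
```

Comparing this figure across the pulls in the log (the apiserver image took ~14s, pause only ~4.7s for a far smaller payload) gives a rough sense of registry throughput during this boot.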
Mar 14 00:15:37.757816 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:15:37.771735 (kubelet)[1983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:15:37.882210 kubelet[1983]: E0314 00:15:37.881865 1983 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:15:37.887462 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:15:37.889106 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:15:38.163718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238900398.mount: Deactivated successfully. Mar 14 00:15:43.014832 containerd[1471]: time="2026-03-14T00:15:43.011061944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:43.014832 containerd[1471]: time="2026-03-14T00:15:43.013946086Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23556542" Mar 14 00:15:43.055748 containerd[1471]: time="2026-03-14T00:15:43.055109001Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:43.064122 containerd[1471]: time="2026-03-14T00:15:43.063925101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:43.071630 containerd[1471]: 
time="2026-03-14T00:15:43.068621138Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 6.302502185s" Mar 14 00:15:43.071630 containerd[1471]: time="2026-03-14T00:15:43.068861417Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Mar 14 00:15:43.076687 containerd[1471]: time="2026-03-14T00:15:43.075900554Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 14 00:15:47.485936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1239242838.mount: Deactivated successfully. Mar 14 00:15:47.568625 containerd[1471]: time="2026-03-14T00:15:47.567022074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:47.594004 containerd[1471]: time="2026-03-14T00:15:47.591245927Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 14 00:15:47.796353 containerd[1471]: time="2026-03-14T00:15:47.793888490Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:47.803743 containerd[1471]: time="2026-03-14T00:15:47.801686135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:47.803743 containerd[1471]: time="2026-03-14T00:15:47.802925900Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with 
image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 4.726972617s" Mar 14 00:15:47.803743 containerd[1471]: time="2026-03-14T00:15:47.803200342Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 14 00:15:47.810747 containerd[1471]: time="2026-03-14T00:15:47.810618315Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Mar 14 00:15:48.115070 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Mar 14 00:15:48.153812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:15:49.087927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928037427.mount: Deactivated successfully. Mar 14 00:15:49.171364 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:15:49.184800 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:15:50.405018 kubelet[2064]: E0314 00:15:50.404124 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:15:50.410211 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:15:50.410619 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:15:50.411157 systemd[1]: kubelet.service: Consumed 1.904s CPU time. 
Mar 14 00:15:58.794463 containerd[1471]: time="2026-03-14T00:15:58.794003882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:58.799463 containerd[1471]: time="2026-03-14T00:15:58.796793293Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23630322" Mar 14 00:15:58.801655 containerd[1471]: time="2026-03-14T00:15:58.801189941Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:58.809206 containerd[1471]: time="2026-03-14T00:15:58.809073001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:15:58.820044 containerd[1471]: time="2026-03-14T00:15:58.819846943Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 11.009102202s" Mar 14 00:15:58.820044 containerd[1471]: time="2026-03-14T00:15:58.819998839Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Mar 14 00:16:00.720607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Mar 14 00:16:00.877944 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:02.260621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:16:02.284334 (kubelet)[2164]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:16:02.553449 kubelet[2164]: E0314 00:16:02.552128 2164 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:16:02.561248 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:16:02.561704 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:16:02.562270 systemd[1]: kubelet.service: Consumed 1.157s CPU time, 97.2M memory peak, 0B memory swap peak. Mar 14 00:16:04.615000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:16:04.615265 systemd[1]: kubelet.service: Consumed 1.157s CPU time, 97.2M memory peak, 0B memory swap peak. Mar 14 00:16:04.642180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:04.717784 systemd[1]: Reloading requested from client PID 2180 ('systemctl') (unit session-9.scope)... Mar 14 00:16:04.717847 systemd[1]: Reloading... Mar 14 00:16:05.016664 zram_generator::config[2219]: No configuration found. Mar 14 00:16:05.310798 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:16:05.457656 systemd[1]: Reloading finished in 738 ms. Mar 14 00:16:05.593287 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:05.604867 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:16:05.605484 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:16:05.645049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:16:06.220850 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:16:06.272925 (kubelet)[2270]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:16:06.518169 kubelet[2270]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:16:07.255020 kubelet[2270]: I0314 00:16:07.252189 2270 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Mar 14 00:16:07.255020 kubelet[2270]: I0314 00:16:07.253118 2270 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:16:07.255020 kubelet[2270]: I0314 00:16:07.253274 2270 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:16:07.255020 kubelet[2270]: I0314 00:16:07.253289 2270 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Mar 14 00:16:07.255020 kubelet[2270]: I0314 00:16:07.253908 2270 server.go:951] "Client rotation is on, will bootstrap in background" Mar 14 00:16:07.372050 kubelet[2270]: E0314 00:16:07.371813 2270 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 14 00:16:07.376472 kubelet[2270]: I0314 00:16:07.375919 2270 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:16:07.400472 kubelet[2270]: E0314 00:16:07.395271 2270 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:16:07.400472 kubelet[2270]: I0314 00:16:07.395434 2270 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:16:07.418191 kubelet[2270]: I0314 00:16:07.417085 2270 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 14 00:16:07.422697 kubelet[2270]: I0314 00:16:07.422435 2270 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:16:07.423647 kubelet[2270]: I0314 00:16:07.422831 2270 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:16:07.423647 kubelet[2270]: I0314 00:16:07.423402 2270 topology_manager.go:143] "Creating topology manager with none policy" Mar 14 00:16:07.423647 
kubelet[2270]: I0314 00:16:07.423427 2270 container_manager_linux.go:308] "Creating device plugin manager" Mar 14 00:16:07.424223 kubelet[2270]: I0314 00:16:07.423801 2270 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:16:07.449226 kubelet[2270]: I0314 00:16:07.447909 2270 state_mem.go:41] "Initialized" logger="CPUManager state memory" Mar 14 00:16:07.449226 kubelet[2270]: I0314 00:16:07.449099 2270 kubelet.go:482] "Attempting to sync node with API server" Mar 14 00:16:07.449226 kubelet[2270]: I0314 00:16:07.449169 2270 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:16:07.449706 kubelet[2270]: I0314 00:16:07.449290 2270 kubelet.go:394] "Adding apiserver pod source" Mar 14 00:16:07.449706 kubelet[2270]: I0314 00:16:07.449312 2270 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:16:07.464282 kubelet[2270]: I0314 00:16:07.462831 2270 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:16:07.494823 kubelet[2270]: I0314 00:16:07.494229 2270 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:16:07.747651 kubelet[2270]: I0314 00:16:07.746227 2270 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:16:07.749733 kubelet[2270]: W0314 00:16:07.748902 2270 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 14 00:16:07.778578 kubelet[2270]: I0314 00:16:07.776763 2270 server.go:1257] "Started kubelet" Mar 14 00:16:07.779220 kubelet[2270]: I0314 00:16:07.779079 2270 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:16:07.791642 kubelet[2270]: I0314 00:16:07.786699 2270 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:16:07.791830 kubelet[2270]: I0314 00:16:07.791720 2270 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:16:07.799576 kubelet[2270]: I0314 00:16:07.799461 2270 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:16:07.803273 kubelet[2270]: I0314 00:16:07.802265 2270 server.go:317] "Adding debug handlers to kubelet server" Mar 14 00:16:07.803273 kubelet[2270]: I0314 00:16:07.803049 2270 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Mar 14 00:16:07.804674 kubelet[2270]: E0314 00:16:07.801425 2270 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c8cfc1fec6d33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:16:07.772368179 +0000 UTC m=+1.478699968,LastTimestamp:2026-03-14 00:16:07.772368179 +0000 UTC m=+1.478699968,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 14 00:16:07.807377 kubelet[2270]: I0314 00:16:07.805771 2270 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:16:07.810857 kubelet[2270]: I0314 00:16:07.810791 2270 volume_manager.go:311] "Starting Kubelet Volume Manager" Mar 14 00:16:07.811483 kubelet[2270]: I0314 00:16:07.811299 2270 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:16:07.811631 kubelet[2270]: E0314 00:16:07.811246 2270 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:16:07.811921 kubelet[2270]: I0314 00:16:07.811790 2270 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:16:07.813696 kubelet[2270]: I0314 00:16:07.813623 2270 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:16:07.813894 kubelet[2270]: I0314 00:16:07.813804 2270 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:16:07.814828 kubelet[2270]: E0314 00:16:07.814366 2270 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:16:07.814987 kubelet[2270]: E0314 00:16:07.814892 2270 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Mar 14 00:16:07.820991 kubelet[2270]: I0314 00:16:07.820912 2270 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:16:07.828222 kubelet[2270]: I0314 00:16:07.827283 2270 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Mar 14 00:16:07.917737 kubelet[2270]: E0314 00:16:07.913795 2270 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:16:08.089998 kubelet[2270]: E0314 00:16:08.087250 2270 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:16:08.089998 kubelet[2270]: E0314 00:16:08.089430 2270 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Mar 14 00:16:08.111824 kubelet[2270]: I0314 00:16:08.111782 2270 cpu_manager.go:225] "Starting" policy="none" Mar 14 00:16:08.112624 kubelet[2270]: I0314 00:16:08.111977 2270 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 14 00:16:08.112624 kubelet[2270]: I0314 00:16:08.112057 2270 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Mar 14 00:16:08.123919 kubelet[2270]: I0314 00:16:08.123850 2270 policy_none.go:50] "Start" Mar 14 00:16:08.123919 kubelet[2270]: I0314 00:16:08.123910 2270 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:16:08.124145 kubelet[2270]: I0314 00:16:08.123982 2270 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:16:08.139418 kubelet[2270]: I0314 00:16:08.139335 2270 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 14 00:16:08.139652 kubelet[2270]: I0314 00:16:08.139463 2270 status_manager.go:249] "Starting to sync pod status with apiserver" Mar 14 00:16:08.139693 kubelet[2270]: I0314 00:16:08.139678 2270 kubelet.go:2501] "Starting kubelet main sync loop" Mar 14 00:16:08.140048 kubelet[2270]: E0314 00:16:08.139957 2270 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:16:08.141626 kubelet[2270]: I0314 00:16:08.141462 2270 policy_none.go:44] "Start" Mar 14 00:16:08.163015 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 14 00:16:08.188167 kubelet[2270]: E0314 00:16:08.188088 2270 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:16:08.201730 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 14 00:16:08.294761 kubelet[2270]: E0314 00:16:08.274345 2270 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 00:16:08.294761 kubelet[2270]: E0314 00:16:08.290474 2270 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:16:08.395787 kubelet[2270]: E0314 00:16:08.394593 2270 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 14 00:16:08.396920 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 14 00:16:08.423441 kubelet[2270]: E0314 00:16:08.420630 2270 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:16:08.423441 kubelet[2270]: I0314 00:16:08.421417 2270 eviction_manager.go:194] "Eviction manager: starting control loop" Mar 14 00:16:08.423441 kubelet[2270]: I0314 00:16:08.421470 2270 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:16:08.423892 kubelet[2270]: I0314 00:16:08.423826 2270 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Mar 14 00:16:08.448199 kubelet[2270]: E0314 00:16:08.448129 2270 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:16:08.448403 kubelet[2270]: E0314 00:16:08.448302 2270 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 14 00:16:08.579949 kubelet[2270]: E0314 00:16:08.579045 2270 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Mar 14 00:16:08.676592 kubelet[2270]: I0314 00:16:08.665065 2270 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 14 00:16:08.678711 kubelet[2270]: E0314 00:16:08.677899 2270 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Mar 14 00:16:08.679140 kubelet[2270]: I0314 00:16:08.678934 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/7ae15148b7c2f617cacab3a209223608-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7ae15148b7c2f617cacab3a209223608\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:16:08.680090 kubelet[2270]: I0314 00:16:08.679461 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ae15148b7c2f617cacab3a209223608-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ae15148b7c2f617cacab3a209223608\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:16:08.681216 kubelet[2270]: I0314 00:16:08.680424 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ae15148b7c2f617cacab3a209223608-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ae15148b7c2f617cacab3a209223608\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:16:08.753300 systemd[1]: Created slice kubepods-burstable-pod7ae15148b7c2f617cacab3a209223608.slice - libcontainer container kubepods-burstable-pod7ae15148b7c2f617cacab3a209223608.slice. 
Mar 14 00:16:08.784134 kubelet[2270]: I0314 00:16:08.782743 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:16:08.789787 kubelet[2270]: I0314 00:16:08.786649 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:16:08.789787 kubelet[2270]: I0314 00:16:08.786795 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:16:08.789787 kubelet[2270]: I0314 00:16:08.786822 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:16:08.789787 kubelet[2270]: I0314 00:16:08.786844 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " 
pod="kube-system/kube-controller-manager-localhost" Mar 14 00:16:08.805783 kubelet[2270]: E0314 00:16:08.805669 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:16:08.856195 kubelet[2270]: E0314 00:16:08.826931 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:08.910819 systemd[1]: Created slice kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice - libcontainer container kubepods-burstable-podf420dd303687d038b2bc2fa1d277c55c.slice. Mar 14 00:16:08.912017 kubelet[2270]: I0314 00:16:08.911131 2270 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost" Mar 14 00:16:08.917133 containerd[1471]: time="2026-03-14T00:16:08.916769429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7ae15148b7c2f617cacab3a209223608,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:08.924186 kubelet[2270]: I0314 00:16:08.922931 2270 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Mar 14 00:16:08.924186 kubelet[2270]: E0314 00:16:08.923599 2270 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Mar 14 00:16:08.924186 kubelet[2270]: E0314 00:16:08.924033 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:16:08.950942 systemd[1]: Created slice 
kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice - libcontainer container kubepods-burstable-podbd81bb6a14e176da833e3a8030ee5eac.slice.
Mar 14 00:16:08.961614 kubelet[2270]: E0314 00:16:08.958793 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:09.055928 kubelet[2270]: E0314 00:16:09.052072 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:09.100612 containerd[1471]: time="2026-03-14T00:16:09.086672482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,}"
Mar 14 00:16:09.273887 kubelet[2270]: E0314 00:16:09.272388 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:09.284433 containerd[1471]: time="2026-03-14T00:16:09.284263738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,}"
Mar 14 00:16:09.341040 kubelet[2270]: I0314 00:16:09.339976 2270 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:16:09.341614 kubelet[2270]: E0314 00:16:09.341400 2270 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost"
Mar 14 00:16:09.385744 kubelet[2270]: E0314 00:16:09.382855 2270 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s"
Mar 14 00:16:09.468873 kubelet[2270]: E0314 00:16:09.467891 2270 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:16:10.163028 kubelet[2270]: I0314 00:16:10.162305 2270 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:16:10.163028 kubelet[2270]: E0314 00:16:10.164051 2270 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost"
Mar 14 00:16:10.217658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1567924883.mount: Deactivated successfully.
Mar 14 00:16:10.236039 containerd[1471]: time="2026-03-14T00:16:10.235924932Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:16:10.245057 containerd[1471]: time="2026-03-14T00:16:10.244906598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 14 00:16:10.247692 containerd[1471]: time="2026-03-14T00:16:10.247473012Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:16:10.250913 containerd[1471]: time="2026-03-14T00:16:10.250851327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:16:10.254468 containerd[1471]: time="2026-03-14T00:16:10.254301199Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:16:10.259210 containerd[1471]: time="2026-03-14T00:16:10.259139244Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:16:10.263370 containerd[1471]: time="2026-03-14T00:16:10.261739714Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:16:10.268387 containerd[1471]: time="2026-03-14T00:16:10.268263192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:16:10.282247 containerd[1471]: time="2026-03-14T00:16:10.280396294Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 995.941489ms"
Mar 14 00:16:10.285254 containerd[1471]: time="2026-03-14T00:16:10.284797839Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.365643368s"
Mar 14 00:16:10.289201 containerd[1471]: time="2026-03-14T00:16:10.287361988Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.178207872s"
Mar 14 00:16:10.986394 kubelet[2270]: E0314 00:16:10.985741 2270 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="3.2s"
Mar 14 00:16:11.774763 containerd[1471]: time="2026-03-14T00:16:11.765176152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:16:11.774763 containerd[1471]: time="2026-03-14T00:16:11.765292337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:16:11.774763 containerd[1471]: time="2026-03-14T00:16:11.765311211Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:11.774763 containerd[1471]: time="2026-03-14T00:16:11.765698749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:11.774763 containerd[1471]: time="2026-03-14T00:16:11.763328350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:16:11.774763 containerd[1471]: time="2026-03-14T00:16:11.770386285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:16:11.774763 containerd[1471]: time="2026-03-14T00:16:11.770408455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:11.781115 containerd[1471]: time="2026-03-14T00:16:11.775143288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:11.787579 kubelet[2270]: I0314 00:16:11.787440 2270 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:16:11.788281 kubelet[2270]: E0314 00:16:11.788142 2270 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost"
Mar 14 00:16:11.807065 containerd[1471]: time="2026-03-14T00:16:11.806846912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:16:11.807336 containerd[1471]: time="2026-03-14T00:16:11.807037190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:16:11.807336 containerd[1471]: time="2026-03-14T00:16:11.807052717Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:11.808087 containerd[1471]: time="2026-03-14T00:16:11.807841788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:16:12.088092 systemd[1]: Started cri-containerd-2fe50cfba749d06ba87e0eddc458be341190a714787fc33f6430fc46bfa8f4fd.scope - libcontainer container 2fe50cfba749d06ba87e0eddc458be341190a714787fc33f6430fc46bfa8f4fd.
Mar 14 00:16:12.470759 systemd[1]: Started cri-containerd-bdf0671e0a7727593d464149c97eee18bae778a2e93224841376ad75d09e0553.scope - libcontainer container bdf0671e0a7727593d464149c97eee18bae778a2e93224841376ad75d09e0553.
Mar 14 00:16:12.484410 systemd[1]: Started cri-containerd-dc4927fa076429b06195976bdf5bde72c6d112dd17c8b011c20451409493f569.scope - libcontainer container dc4927fa076429b06195976bdf5bde72c6d112dd17c8b011c20451409493f569.
Mar 14 00:16:13.086921 containerd[1471]: time="2026-03-14T00:16:13.085431906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:bd81bb6a14e176da833e3a8030ee5eac,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdf0671e0a7727593d464149c97eee18bae778a2e93224841376ad75d09e0553\""
Mar 14 00:16:13.090334 kubelet[2270]: E0314 00:16:13.090205 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:13.116787 containerd[1471]: time="2026-03-14T00:16:13.111904150Z" level=info msg="CreateContainer within sandbox \"bdf0671e0a7727593d464149c97eee18bae778a2e93224841376ad75d09e0553\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 14 00:16:13.124267 containerd[1471]: time="2026-03-14T00:16:13.123274349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:7ae15148b7c2f617cacab3a209223608,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fe50cfba749d06ba87e0eddc458be341190a714787fc33f6430fc46bfa8f4fd\""
Mar 14 00:16:13.125391 kubelet[2270]: E0314 00:16:13.125067 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:13.158245 containerd[1471]: time="2026-03-14T00:16:13.158013428Z" level=info msg="CreateContainer within sandbox \"2fe50cfba749d06ba87e0eddc458be341190a714787fc33f6430fc46bfa8f4fd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 14 00:16:13.167819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1594767503.mount: Deactivated successfully.
Mar 14 00:16:13.178631 containerd[1471]: time="2026-03-14T00:16:13.178296497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:f420dd303687d038b2bc2fa1d277c55c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc4927fa076429b06195976bdf5bde72c6d112dd17c8b011c20451409493f569\""
Mar 14 00:16:13.181449 kubelet[2270]: E0314 00:16:13.180355 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:13.185462 containerd[1471]: time="2026-03-14T00:16:13.185363898Z" level=info msg="CreateContainer within sandbox \"bdf0671e0a7727593d464149c97eee18bae778a2e93224841376ad75d09e0553\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c2d8936f3abdd262db8be04806b403de00d4f5d6db53182950d28b1df10ddb0e\""
Mar 14 00:16:13.187648 containerd[1471]: time="2026-03-14T00:16:13.187348299Z" level=info msg="StartContainer for \"c2d8936f3abdd262db8be04806b403de00d4f5d6db53182950d28b1df10ddb0e\""
Mar 14 00:16:13.193067 containerd[1471]: time="2026-03-14T00:16:13.193018620Z" level=info msg="CreateContainer within sandbox \"dc4927fa076429b06195976bdf5bde72c6d112dd17c8b011c20451409493f569\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 14 00:16:13.255313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2948057079.mount: Deactivated successfully.
Mar 14 00:16:13.282279 containerd[1471]: time="2026-03-14T00:16:13.281109599Z" level=info msg="CreateContainer within sandbox \"2fe50cfba749d06ba87e0eddc458be341190a714787fc33f6430fc46bfa8f4fd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a649195bec4c1d9a76178d03a72c0b9520385eb547330ee0e9abe92eec27f1a0\""
Mar 14 00:16:13.283185 containerd[1471]: time="2026-03-14T00:16:13.283083065Z" level=info msg="StartContainer for \"a649195bec4c1d9a76178d03a72c0b9520385eb547330ee0e9abe92eec27f1a0\""
Mar 14 00:16:13.287395 containerd[1471]: time="2026-03-14T00:16:13.287356775Z" level=info msg="CreateContainer within sandbox \"dc4927fa076429b06195976bdf5bde72c6d112dd17c8b011c20451409493f569\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"feaacbe04f9d93f84c1461ff3f6778c28726e1a32812c828b94c254818fd96b7\""
Mar 14 00:16:13.289882 containerd[1471]: time="2026-03-14T00:16:13.289784671Z" level=info msg="StartContainer for \"feaacbe04f9d93f84c1461ff3f6778c28726e1a32812c828b94c254818fd96b7\""
Mar 14 00:16:13.367884 systemd[1]: Started cri-containerd-c2d8936f3abdd262db8be04806b403de00d4f5d6db53182950d28b1df10ddb0e.scope - libcontainer container c2d8936f3abdd262db8be04806b403de00d4f5d6db53182950d28b1df10ddb0e.
Mar 14 00:16:13.590123 systemd[1]: Started cri-containerd-a649195bec4c1d9a76178d03a72c0b9520385eb547330ee0e9abe92eec27f1a0.scope - libcontainer container a649195bec4c1d9a76178d03a72c0b9520385eb547330ee0e9abe92eec27f1a0.
Mar 14 00:16:13.616031 systemd[1]: Started cri-containerd-feaacbe04f9d93f84c1461ff3f6778c28726e1a32812c828b94c254818fd96b7.scope - libcontainer container feaacbe04f9d93f84c1461ff3f6778c28726e1a32812c828b94c254818fd96b7.
Mar 14 00:16:13.973455 kubelet[2270]: E0314 00:16:13.973154 2270 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:16:14.256196 kubelet[2270]: E0314 00:16:14.254980 2270 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="6.4s"
Mar 14 00:16:14.294158 containerd[1471]: time="2026-03-14T00:16:14.292165832Z" level=info msg="StartContainer for \"c2d8936f3abdd262db8be04806b403de00d4f5d6db53182950d28b1df10ddb0e\" returns successfully"
Mar 14 00:16:14.294158 containerd[1471]: time="2026-03-14T00:16:14.292595577Z" level=info msg="StartContainer for \"a649195bec4c1d9a76178d03a72c0b9520385eb547330ee0e9abe92eec27f1a0\" returns successfully"
Mar 14 00:16:14.493699 containerd[1471]: time="2026-03-14T00:16:14.490447791Z" level=info msg="StartContainer for \"feaacbe04f9d93f84c1461ff3f6778c28726e1a32812c828b94c254818fd96b7\" returns successfully"
Mar 14 00:16:15.006229 kubelet[2270]: I0314 00:16:15.003437 2270 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:16:15.006229 kubelet[2270]: E0314 00:16:15.005133 2270 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost"
Mar 14 00:16:15.018659 kubelet[2270]: E0314 00:16:15.018383 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:15.026744 kubelet[2270]: E0314 00:16:15.023468 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:15.031265 kubelet[2270]: E0314 00:16:15.029229 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:15.072280 kubelet[2270]: E0314 00:16:15.071159 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:15.080081 kubelet[2270]: E0314 00:16:15.079973 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:15.080781 kubelet[2270]: E0314 00:16:15.080406 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:16.142743 kubelet[2270]: E0314 00:16:16.142200 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:16.142743 kubelet[2270]: E0314 00:16:16.142478 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:16.149652 kubelet[2270]: E0314 00:16:16.149413 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:16.150379 kubelet[2270]: E0314 00:16:16.149758 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:16.150379 kubelet[2270]: E0314 00:16:16.150202 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:16.150379 kubelet[2270]: E0314 00:16:16.150351 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:17.156356 kubelet[2270]: E0314 00:16:17.155848 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:17.156356 kubelet[2270]: E0314 00:16:17.156117 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:17.156356 kubelet[2270]: E0314 00:16:17.156684 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:17.163390 kubelet[2270]: E0314 00:16:17.156879 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:18.566042 kubelet[2270]: E0314 00:16:18.565523 2270 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 14 00:16:18.578200 kubelet[2270]: E0314 00:16:18.578172 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:18.578475 kubelet[2270]: E0314 00:16:18.578329 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:19.908414 kubelet[2270]: E0314 00:16:19.905733 2270 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:16:19.908414 kubelet[2270]: E0314 00:16:19.906102 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:20.715418 kubelet[2270]: E0314 00:16:20.715164 2270 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 14 00:16:21.049421 kubelet[2270]: E0314 00:16:21.045266 2270 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Mar 14 00:16:21.416287 kubelet[2270]: I0314 00:16:21.415827 2270 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:16:21.438382 kubelet[2270]: I0314 00:16:21.438270 2270 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 14 00:16:21.438382 kubelet[2270]: E0314 00:16:21.438358 2270 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 14 00:16:21.514300 kubelet[2270]: I0314 00:16:21.514020 2270 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:16:21.564428 kubelet[2270]: I0314 00:16:21.564258 2270 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:16:21.574311 kubelet[2270]: I0314 00:16:21.574106 2270 apiserver.go:52] "Watching apiserver"
Mar 14 00:16:21.580999 kubelet[2270]: I0314 00:16:21.580728 2270 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:16:21.585437 kubelet[2270]: E0314 00:16:21.584610 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:21.585437 kubelet[2270]: E0314 00:16:21.585273 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:21.646190 kubelet[2270]: I0314 00:16:21.625145 2270 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 14 00:16:21.652241 kubelet[2270]: E0314 00:16:21.651031 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:21.670475 kubelet[2270]: E0314 00:16:21.668304 2270 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:16:25.708655 systemd[1]: Reloading requested from client PID 2563 ('systemctl') (unit session-9.scope)...
Mar 14 00:16:25.708713 systemd[1]: Reloading...
Mar 14 00:16:25.890614 zram_generator::config[2605]: No configuration found.
Mar 14 00:16:26.082447 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:16:26.250014 systemd[1]: Reloading finished in 540 ms.
Mar 14 00:16:26.329617 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:26.358751 systemd[1]: kubelet.service: Deactivated successfully.
Mar 14 00:16:26.359715 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:26.359879 systemd[1]: kubelet.service: Consumed 9.480s CPU time, 130.7M memory peak, 0B memory swap peak.
Mar 14 00:16:26.374396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:16:26.661645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:16:26.679427 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:16:26.884892 kubelet[2647]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:16:26.898816 kubelet[2647]: I0314 00:16:26.897963 2647 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Mar 14 00:16:26.898942 kubelet[2647]: I0314 00:16:26.898841 2647 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:16:26.898942 kubelet[2647]: I0314 00:16:26.898877 2647 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:16:26.898942 kubelet[2647]: I0314 00:16:26.898886 2647 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:16:26.899452 kubelet[2647]: I0314 00:16:26.899353 2647 server.go:951] "Client rotation is on, will bootstrap in background"
Mar 14 00:16:26.901830 kubelet[2647]: I0314 00:16:26.901741 2647 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 14 00:16:26.907199 kubelet[2647]: I0314 00:16:26.905997 2647 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:16:26.913465 kubelet[2647]: E0314 00:16:26.913274 2647 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:16:26.913465 kubelet[2647]: I0314 00:16:26.913354 2647 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:16:26.923303 kubelet[2647]: I0314 00:16:26.923111 2647 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:16:26.923666 kubelet[2647]: I0314 00:16:26.923610 2647 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:16:26.924156 kubelet[2647]: I0314 00:16:26.923642 2647 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:16:26.924156 kubelet[2647]: I0314 00:16:26.923883 2647 topology_manager.go:143] "Creating topology manager with none policy"
Mar 14 00:16:26.924156 kubelet[2647]: I0314 00:16:26.923896 2647 container_manager_linux.go:308] "Creating device plugin manager"
Mar 14 00:16:26.924156 kubelet[2647]: I0314 00:16:26.923921 2647 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:16:26.926113 kubelet[2647]: I0314 00:16:26.924927 2647 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Mar 14 00:16:26.926113 kubelet[2647]: I0314 00:16:26.925928 2647 kubelet.go:482] "Attempting to sync node with API server"
Mar 14 00:16:26.926113 kubelet[2647]: I0314 00:16:26.925942 2647 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:16:26.926113 kubelet[2647]: I0314 00:16:26.925962 2647 kubelet.go:394] "Adding apiserver pod source"
Mar 14 00:16:26.926113 kubelet[2647]: I0314 00:16:26.925971 2647 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:16:26.930093 kubelet[2647]: I0314 00:16:26.929974 2647 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:16:26.931633 kubelet[2647]: I0314 00:16:26.931245 2647 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:16:26.931633 kubelet[2647]: I0314 00:16:26.931282 2647 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:16:26.943121 kubelet[2647]: I0314 00:16:26.943091 2647 server.go:1257] "Started kubelet"
Mar 14 00:16:26.946609 kubelet[2647]: I0314 00:16:26.946433 2647 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Mar 14 00:16:26.953886 kubelet[2647]: I0314 00:16:26.953846 2647 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:16:26.959394 kubelet[2647]: I0314 00:16:26.959292 2647 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:16:26.959600 kubelet[2647]: I0314 00:16:26.959426 2647 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:16:26.960038 kubelet[2647]: I0314 00:16:26.959958 2647 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:16:26.962470 kubelet[2647]: I0314 00:16:26.962372 2647 volume_manager.go:311] "Starting Kubelet Volume Manager"
Mar 14 00:16:26.963195 kubelet[2647]: I0314 00:16:26.963025 2647 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:16:26.963465 kubelet[2647]: I0314 00:16:26.963435 2647 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:16:26.969557 kubelet[2647]: I0314 00:16:26.969341 2647 server.go:317] "Adding debug handlers to kubelet server"
Mar 14 00:16:26.971024 kubelet[2647]: I0314 00:16:26.970316 2647 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:16:26.972691 kubelet[2647]: I0314 00:16:26.972369 2647 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:16:26.980639 kubelet[2647]: I0314 00:16:26.979656 2647 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:16:26.988298 kubelet[2647]: I0314 00:16:26.988265 2647 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:16:26.992713 kubelet[2647]: E0314 00:16:26.990353 2647 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:16:27.006704 kubelet[2647]: I0314 00:16:27.006589 2647 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:16:27.015384 sudo[2677]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 14 00:16:27.016319 sudo[2677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 14 00:16:27.020233 kubelet[2647]: I0314 00:16:27.020099 2647 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:16:27.020233 kubelet[2647]: I0314 00:16:27.020121 2647 status_manager.go:249] "Starting to sync pod status with apiserver"
Mar 14 00:16:27.020233 kubelet[2647]: I0314 00:16:27.020145 2647 kubelet.go:2501] "Starting kubelet main sync loop"
Mar 14 00:16:27.020694 kubelet[2647]: E0314 00:16:27.020442 2647 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:16:27.077209 kubelet[2647]: I0314 00:16:27.077109 2647 cpu_manager.go:225] "Starting" policy="none"
Mar 14 00:16:27.077209 kubelet[2647]: I0314 00:16:27.077218 2647 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 14 00:16:27.077398 kubelet[2647]: I0314 00:16:27.077240 2647 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Mar 14 00:16:27.078468 kubelet[2647]: I0314 00:16:27.077579 2647 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Mar 14 00:16:27.078468 kubelet[2647]: I0314 00:16:27.077600 2647 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Mar 14 00:16:27.078468 kubelet[2647]: I0314 00:16:27.077619 2647 policy_none.go:50] "Start"
Mar 14 00:16:27.078468 kubelet[2647]: I0314 00:16:27.077629 2647 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:16:27.078468 kubelet[2647]: I0314 00:16:27.077642 2647 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:16:27.078468 kubelet[2647]: I0314 00:16:27.077861 2647 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 14 00:16:27.078468 kubelet[2647]: I0314 00:16:27.077874 2647 policy_none.go:44] "Start"
Mar 14 00:16:27.095835 kubelet[2647]: E0314 00:16:27.095735 2647 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:16:27.096221 kubelet[2647]: I0314 00:16:27.096154 2647 eviction_manager.go:194] "Eviction manager: starting control loop"
Mar 14 00:16:27.096221 kubelet[2647]: I0314 00:16:27.096199 2647 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:16:27.096793 kubelet[2647]: I0314 00:16:27.096452 2647 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Mar 14 00:16:27.102032 kubelet[2647]: E0314 00:16:27.101958 2647 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:16:27.128609 kubelet[2647]: I0314 00:16:27.127227 2647 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:16:27.128609 kubelet[2647]: I0314 00:16:27.127869 2647 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:16:27.128609 kubelet[2647]: I0314 00:16:27.127882 2647 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:16:27.141383 kubelet[2647]: E0314 00:16:27.141236 2647 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 14 00:16:27.142985 kubelet[2647]: E0314 00:16:27.142866 2647 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 14 00:16:27.143865 kubelet[2647]: E0314 00:16:27.143741 2647 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:16:27.208959 kubelet[2647]: I0314 00:16:27.208653 2647 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Mar 14 00:16:27.225859 kubelet[2647]: I0314 00:16:27.225728 2647 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Mar 14 00:16:27.226020 kubelet[2647]: I0314 00:16:27.225933 2647 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Mar 14 00:16:27.267369 kubelet[2647]: I0314 00:16:27.267036 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ae15148b7c2f617cacab3a209223608-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"7ae15148b7c2f617cacab3a209223608\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:16:27.267369 kubelet[2647]: I0314 00:16:27.267157 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:16:27.267369 kubelet[2647]: I0314 00:16:27.267292 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:16:27.267369 kubelet[2647]: I0314 00:16:27.267323 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:16:27.267369 kubelet[2647]: I0314 00:16:27.267352 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/bd81bb6a14e176da833e3a8030ee5eac-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"bd81bb6a14e176da833e3a8030ee5eac\") " pod="kube-system/kube-scheduler-localhost"
Mar 14 00:16:27.267907 kubelet[2647]: I0314 00:16:27.267376 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ae15148b7c2f617cacab3a209223608-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ae15148b7c2f617cacab3a209223608\") "
pod="kube-system/kube-apiserver-localhost" Mar 14 00:16:27.267907 kubelet[2647]: I0314 00:16:27.267397 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ae15148b7c2f617cacab3a209223608-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"7ae15148b7c2f617cacab3a209223608\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:16:27.267907 kubelet[2647]: I0314 00:16:27.267424 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:16:27.267907 kubelet[2647]: I0314 00:16:27.267448 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f420dd303687d038b2bc2fa1d277c55c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"f420dd303687d038b2bc2fa1d277c55c\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:16:27.442106 kubelet[2647]: E0314 00:16:27.441967 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:27.444218 kubelet[2647]: E0314 00:16:27.444122 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:27.444595 kubelet[2647]: E0314 00:16:27.444406 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:27.987330 kubelet[2647]: I0314 
00:16:27.968044 2647 apiserver.go:52] "Watching apiserver" Mar 14 00:16:28.500794 kubelet[2647]: I0314 00:16:28.500300 2647 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:16:28.501409 kubelet[2647]: I0314 00:16:28.501172 2647 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:16:28.503484 kubelet[2647]: I0314 00:16:28.503200 2647 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:16:28.856179 kubelet[2647]: E0314 00:16:28.852581 2647 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 14 00:16:28.856179 kubelet[2647]: E0314 00:16:28.853380 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:28.856179 kubelet[2647]: E0314 00:16:28.854821 2647 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:16:28.856179 kubelet[2647]: E0314 00:16:28.855130 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:28.861607 kubelet[2647]: E0314 00:16:28.859803 2647 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 14 00:16:28.861607 kubelet[2647]: E0314 00:16:28.860060 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:28.864847 kubelet[2647]: I0314 00:16:28.863966 2647 
desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:16:29.562860 kubelet[2647]: E0314 00:16:29.562766 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:29.567565 kubelet[2647]: E0314 00:16:29.565855 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:29.567565 kubelet[2647]: E0314 00:16:29.566843 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:30.483901 kubelet[2647]: I0314 00:16:30.482300 2647 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:16:30.483901 kubelet[2647]: I0314 00:16:30.483941 2647 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:16:30.491481 containerd[1471]: time="2026-03-14T00:16:30.483148868Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 14 00:16:30.786029 sudo[2677]: pam_unix(sudo:session): session closed for user root Mar 14 00:16:32.183840 kubelet[2647]: I0314 00:16:32.142733 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e21bcb54-209b-4b4f-b0c1-767ec4eff7ba-lib-modules\") pod \"kube-proxy-dctdg\" (UID: \"e21bcb54-209b-4b4f-b0c1-767ec4eff7ba\") " pod="kube-system/kube-proxy-dctdg" Mar 14 00:16:32.243178 kubelet[2647]: I0314 00:16:32.243065 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e21bcb54-209b-4b4f-b0c1-767ec4eff7ba-kube-proxy\") pod \"kube-proxy-dctdg\" (UID: \"e21bcb54-209b-4b4f-b0c1-767ec4eff7ba\") " pod="kube-system/kube-proxy-dctdg" Mar 14 00:16:32.243382 kubelet[2647]: I0314 00:16:32.243190 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx4gq\" (UniqueName: \"kubernetes.io/projected/e21bcb54-209b-4b4f-b0c1-767ec4eff7ba-kube-api-access-bx4gq\") pod \"kube-proxy-dctdg\" (UID: \"e21bcb54-209b-4b4f-b0c1-767ec4eff7ba\") " pod="kube-system/kube-proxy-dctdg" Mar 14 00:16:32.243382 kubelet[2647]: I0314 00:16:32.243283 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e21bcb54-209b-4b4f-b0c1-767ec4eff7ba-xtables-lock\") pod \"kube-proxy-dctdg\" (UID: \"e21bcb54-209b-4b4f-b0c1-767ec4eff7ba\") " pod="kube-system/kube-proxy-dctdg" Mar 14 00:16:32.256771 systemd[1]: Created slice kubepods-besteffort-pode21bcb54_209b_4b4f_b0c1_767ec4eff7ba.slice - libcontainer container kubepods-besteffort-pode21bcb54_209b_4b4f_b0c1_767ec4eff7ba.slice. 
Mar 14 00:16:32.308605 kubelet[2647]: I0314 00:16:32.308289 2647 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=11.308179788 podStartE2EDuration="11.308179788s" podCreationTimestamp="2026-03-14 00:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:32.268639727 +0000 UTC m=+5.571682798" watchObservedRunningTime="2026-03-14 00:16:32.308179788 +0000 UTC m=+5.611222869" Mar 14 00:16:32.575079 kubelet[2647]: I0314 00:16:32.574151 2647 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=11.573945262 podStartE2EDuration="11.573945262s" podCreationTimestamp="2026-03-14 00:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:32.491631901 +0000 UTC m=+5.794674982" watchObservedRunningTime="2026-03-14 00:16:32.573945262 +0000 UTC m=+5.876988482" Mar 14 00:16:32.575079 kubelet[2647]: I0314 00:16:32.574577 2647 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=11.574567141 podStartE2EDuration="11.574567141s" podCreationTimestamp="2026-03-14 00:16:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:32.557922684 +0000 UTC m=+5.860965756" watchObservedRunningTime="2026-03-14 00:16:32.574567141 +0000 UTC m=+5.877610211" Mar 14 00:16:32.698862 kubelet[2647]: E0314 00:16:32.695885 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:32.706642 containerd[1471]: 
time="2026-03-14T00:16:32.699766037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dctdg,Uid:e21bcb54-209b-4b4f-b0c1-767ec4eff7ba,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:33.190759 containerd[1471]: time="2026-03-14T00:16:33.182325515Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:33.190759 containerd[1471]: time="2026-03-14T00:16:33.187052202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:33.190759 containerd[1471]: time="2026-03-14T00:16:33.187097394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:33.190759 containerd[1471]: time="2026-03-14T00:16:33.187779642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:34.171480 systemd[1]: run-containerd-runc-k8s.io-7a976b40068bba9a05be362fb38118b9973b547ad968a3fe7459944b720308d1-runc.GLSEky.mount: Deactivated successfully. Mar 14 00:16:34.276028 systemd[1]: Started cri-containerd-7a976b40068bba9a05be362fb38118b9973b547ad968a3fe7459944b720308d1.scope - libcontainer container 7a976b40068bba9a05be362fb38118b9973b547ad968a3fe7459944b720308d1. 
Mar 14 00:16:35.173676 containerd[1471]: time="2026-03-14T00:16:35.173205207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dctdg,Uid:e21bcb54-209b-4b4f-b0c1-767ec4eff7ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a976b40068bba9a05be362fb38118b9973b547ad968a3fe7459944b720308d1\"" Mar 14 00:16:35.222806 kubelet[2647]: E0314 00:16:35.178074 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:35.306763 containerd[1471]: time="2026-03-14T00:16:35.306103945Z" level=info msg="CreateContainer within sandbox \"7a976b40068bba9a05be362fb38118b9973b547ad968a3fe7459944b720308d1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:16:35.474018 kubelet[2647]: I0314 00:16:35.473432 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-bpf-maps\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.474018 kubelet[2647]: I0314 00:16:35.473732 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d26d8103-c5fe-4eb0-88b3-058ee820f281-hubble-tls\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.474018 kubelet[2647]: I0314 00:16:35.473775 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj9n8\" (UniqueName: \"kubernetes.io/projected/fd7934fa-f75a-471e-9a55-ea1d310c5afc-kube-api-access-sj9n8\") pod \"cilium-operator-78cf5644cb-rtkdl\" (UID: \"fd7934fa-f75a-471e-9a55-ea1d310c5afc\") " pod="kube-system/cilium-operator-78cf5644cb-rtkdl" Mar 14 00:16:35.489631 
kubelet[2647]: I0314 00:16:35.476343 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-cgroup\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.489631 kubelet[2647]: I0314 00:16:35.476385 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cni-path\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.489631 kubelet[2647]: I0314 00:16:35.476411 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd7934fa-f75a-471e-9a55-ea1d310c5afc-cilium-config-path\") pod \"cilium-operator-78cf5644cb-rtkdl\" (UID: \"fd7934fa-f75a-471e-9a55-ea1d310c5afc\") " pod="kube-system/cilium-operator-78cf5644cb-rtkdl" Mar 14 00:16:35.489631 kubelet[2647]: I0314 00:16:35.476438 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-host-proc-sys-kernel\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.489631 kubelet[2647]: I0314 00:16:35.476460 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-lib-modules\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.489950 kubelet[2647]: I0314 00:16:35.476483 2647 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d26d8103-c5fe-4eb0-88b3-058ee820f281-clustermesh-secrets\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.493744 kubelet[2647]: I0314 00:16:35.490442 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-etc-cni-netd\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.493744 kubelet[2647]: I0314 00:16:35.490576 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-config-path\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.493744 kubelet[2647]: I0314 00:16:35.490601 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-host-proc-sys-net\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.493744 kubelet[2647]: I0314 00:16:35.490625 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-xtables-lock\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.493744 kubelet[2647]: I0314 00:16:35.490674 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4rhl\" (UniqueName: 
\"kubernetes.io/projected/d26d8103-c5fe-4eb0-88b3-058ee820f281-kube-api-access-z4rhl\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.493744 kubelet[2647]: I0314 00:16:35.490701 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-run\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.494071 kubelet[2647]: I0314 00:16:35.490720 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-hostproc\") pod \"cilium-bmf9k\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") " pod="kube-system/cilium-bmf9k" Mar 14 00:16:35.509660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2299573935.mount: Deactivated successfully. Mar 14 00:16:35.549608 systemd[1]: Created slice kubepods-besteffort-podfd7934fa_f75a_471e_9a55_ea1d310c5afc.slice - libcontainer container kubepods-besteffort-podfd7934fa_f75a_471e_9a55_ea1d310c5afc.slice. Mar 14 00:16:35.550717 containerd[1471]: time="2026-03-14T00:16:35.550215821Z" level=info msg="CreateContainer within sandbox \"7a976b40068bba9a05be362fb38118b9973b547ad968a3fe7459944b720308d1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"96f3165e56ad3bd75c6e0c207f0e04166259381d09af1da392122970e857a043\"" Mar 14 00:16:35.560585 containerd[1471]: time="2026-03-14T00:16:35.559827268Z" level=info msg="StartContainer for \"96f3165e56ad3bd75c6e0c207f0e04166259381d09af1da392122970e857a043\"" Mar 14 00:16:35.566438 systemd[1]: Created slice kubepods-burstable-podd26d8103_c5fe_4eb0_88b3_058ee820f281.slice - libcontainer container kubepods-burstable-podd26d8103_c5fe_4eb0_88b3_058ee820f281.slice. 
Mar 14 00:16:35.797135 systemd[1]: Started cri-containerd-96f3165e56ad3bd75c6e0c207f0e04166259381d09af1da392122970e857a043.scope - libcontainer container 96f3165e56ad3bd75c6e0c207f0e04166259381d09af1da392122970e857a043. Mar 14 00:16:36.017817 kubelet[2647]: E0314 00:16:36.016745 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:36.054976 kubelet[2647]: E0314 00:16:36.053816 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:36.062126 containerd[1471]: time="2026-03-14T00:16:36.054610292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-78cf5644cb-rtkdl,Uid:fd7934fa-f75a-471e-9a55-ea1d310c5afc,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:36.075161 containerd[1471]: time="2026-03-14T00:16:36.074837965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmf9k,Uid:d26d8103-c5fe-4eb0-88b3-058ee820f281,Namespace:kube-system,Attempt:0,}" Mar 14 00:16:36.417941 containerd[1471]: time="2026-03-14T00:16:36.412370050Z" level=info msg="StartContainer for \"96f3165e56ad3bd75c6e0c207f0e04166259381d09af1da392122970e857a043\" returns successfully" Mar 14 00:16:36.454329 containerd[1471]: time="2026-03-14T00:16:36.452139491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:36.454329 containerd[1471]: time="2026-03-14T00:16:36.452436115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:36.454329 containerd[1471]: time="2026-03-14T00:16:36.452574334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:36.454329 containerd[1471]: time="2026-03-14T00:16:36.451716120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:16:36.454329 containerd[1471]: time="2026-03-14T00:16:36.451802035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:16:36.454329 containerd[1471]: time="2026-03-14T00:16:36.451886898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:36.454329 containerd[1471]: time="2026-03-14T00:16:36.452682532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:36.454986 containerd[1471]: time="2026-03-14T00:16:36.454741289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:16:36.913446 systemd[1]: run-containerd-runc-k8s.io-b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e-runc.feHLbd.mount: Deactivated successfully. Mar 14 00:16:36.970877 systemd[1]: Started cri-containerd-6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47.scope - libcontainer container 6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47. Mar 14 00:16:37.027358 systemd[1]: Started cri-containerd-b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e.scope - libcontainer container b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e. 
Mar 14 00:16:37.087777 kubelet[2647]: E0314 00:16:37.087645 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:37.244670 kubelet[2647]: I0314 00:16:37.242404 2647 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-dctdg" podStartSLOduration=6.242377493 podStartE2EDuration="6.242377493s" podCreationTimestamp="2026-03-14 00:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:16:37.241774289 +0000 UTC m=+10.544817381" watchObservedRunningTime="2026-03-14 00:16:37.242377493 +0000 UTC m=+10.545420574" Mar 14 00:16:37.382076 containerd[1471]: time="2026-03-14T00:16:37.381311542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bmf9k,Uid:d26d8103-c5fe-4eb0-88b3-058ee820f281,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\"" Mar 14 00:16:37.857227 kubelet[2647]: E0314 00:16:37.856483 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:37.897810 containerd[1471]: time="2026-03-14T00:16:37.897036320Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 14 00:16:38.482582 kubelet[2647]: E0314 00:16:38.481410 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:38.548354 containerd[1471]: time="2026-03-14T00:16:38.548234887Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-operator-78cf5644cb-rtkdl,Uid:fd7934fa-f75a-471e-9a55-ea1d310c5afc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e\"" Mar 14 00:16:38.550816 kubelet[2647]: E0314 00:16:38.550773 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:16:52.192567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3493470818.mount: Deactivated successfully. Mar 14 00:16:59.387371 containerd[1471]: time="2026-03-14T00:16:59.386748879Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:59.390078 containerd[1471]: time="2026-03-14T00:16:59.389740889Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 14 00:16:59.392358 containerd[1471]: time="2026-03-14T00:16:59.392019289Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:16:59.398266 containerd[1471]: time="2026-03-14T00:16:59.398150710Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 21.500981952s" Mar 14 00:16:59.398266 containerd[1471]: time="2026-03-14T00:16:59.398259798Z" level=info msg="PullImage 
\"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 14 00:16:59.402345 containerd[1471]: time="2026-03-14T00:16:59.401840984Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 14 00:16:59.413742 containerd[1471]: time="2026-03-14T00:16:59.413345232Z" level=info msg="CreateContainer within sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:16:59.455680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413920033.mount: Deactivated successfully. Mar 14 00:16:59.458453 containerd[1471]: time="2026-03-14T00:16:59.458346714Z" level=info msg="CreateContainer within sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\"" Mar 14 00:16:59.461975 containerd[1471]: time="2026-03-14T00:16:59.461639959Z" level=info msg="StartContainer for \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\"" Mar 14 00:16:59.582211 systemd[1]: Started cri-containerd-fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145.scope - libcontainer container fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145. Mar 14 00:16:59.662735 containerd[1471]: time="2026-03-14T00:16:59.662302933Z" level=info msg="StartContainer for \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\" returns successfully" Mar 14 00:16:59.687382 systemd[1]: cri-containerd-fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145.scope: Deactivated successfully. 
Mar 14 00:16:59.738311 kubelet[2647]: E0314 00:16:59.737968 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:17:00.070357 containerd[1471]: time="2026-03-14T00:17:00.067280858Z" level=info msg="shim disconnected" id=fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145 namespace=k8s.io Mar 14 00:17:00.070357 containerd[1471]: time="2026-03-14T00:17:00.068001065Z" level=warning msg="cleaning up after shim disconnected" id=fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145 namespace=k8s.io Mar 14 00:17:00.070357 containerd[1471]: time="2026-03-14T00:17:00.068070530Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:17:00.452224 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145-rootfs.mount: Deactivated successfully. Mar 14 00:17:00.748113 kubelet[2647]: E0314 00:17:00.745476 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:17:00.782756 containerd[1471]: time="2026-03-14T00:17:00.782701003Z" level=info msg="CreateContainer within sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:17:00.849082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount369009998.mount: Deactivated successfully. 
Mar 14 00:17:00.907354 containerd[1471]: time="2026-03-14T00:17:00.907278036Z" level=info msg="CreateContainer within sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\""
Mar 14 00:17:00.910177 containerd[1471]: time="2026-03-14T00:17:00.910062048Z" level=info msg="StartContainer for \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\""
Mar 14 00:17:01.051914 systemd[1]: Started cri-containerd-f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be.scope - libcontainer container f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be.
Mar 14 00:17:01.153069 containerd[1471]: time="2026-03-14T00:17:01.152760140Z" level=info msg="StartContainer for \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\" returns successfully"
Mar 14 00:17:01.176225 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:17:01.177741 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:17:01.178810 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:17:01.193406 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:17:01.194663 systemd[1]: cri-containerd-f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be.scope: Deactivated successfully.
Mar 14 00:17:01.304445 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:17:01.367952 containerd[1471]: time="2026-03-14T00:17:01.367438357Z" level=info msg="shim disconnected" id=f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be namespace=k8s.io
Mar 14 00:17:01.367952 containerd[1471]: time="2026-03-14T00:17:01.367609057Z" level=warning msg="cleaning up after shim disconnected" id=f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be namespace=k8s.io
Mar 14 00:17:01.367952 containerd[1471]: time="2026-03-14T00:17:01.367625096Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:01.397713 containerd[1471]: time="2026-03-14T00:17:01.397437959Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:17:01Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:17:01.811953 kubelet[2647]: E0314 00:17:01.811111 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:01.849287 containerd[1471]: time="2026-03-14T00:17:01.848120136Z" level=info msg="CreateContainer within sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:17:01.940017 containerd[1471]: time="2026-03-14T00:17:01.939388248Z" level=info msg="CreateContainer within sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\""
Mar 14 00:17:01.943251 containerd[1471]: time="2026-03-14T00:17:01.943152107Z" level=info msg="StartContainer for \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\""
Mar 14 00:17:02.045128 systemd[1]: Started cri-containerd-03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf.scope - libcontainer container 03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf.
Mar 14 00:17:02.157673 systemd[1]: cri-containerd-03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf.scope: Deactivated successfully.
Mar 14 00:17:02.160900 containerd[1471]: time="2026-03-14T00:17:02.160607670Z" level=info msg="StartContainer for \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\" returns successfully"
Mar 14 00:17:02.359615 containerd[1471]: time="2026-03-14T00:17:02.359163150Z" level=info msg="shim disconnected" id=03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf namespace=k8s.io
Mar 14 00:17:02.359615 containerd[1471]: time="2026-03-14T00:17:02.359237635Z" level=warning msg="cleaning up after shim disconnected" id=03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf namespace=k8s.io
Mar 14 00:17:02.359615 containerd[1471]: time="2026-03-14T00:17:02.359254116Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:02.504377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf-rootfs.mount: Deactivated successfully.
Mar 14 00:17:02.857462 kubelet[2647]: E0314 00:17:02.854950 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:02.874093 containerd[1471]: time="2026-03-14T00:17:02.873481875Z" level=info msg="CreateContainer within sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:17:02.945609 containerd[1471]: time="2026-03-14T00:17:02.945450618Z" level=info msg="CreateContainer within sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\""
Mar 14 00:17:02.948973 containerd[1471]: time="2026-03-14T00:17:02.948937126Z" level=info msg="StartContainer for \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\""
Mar 14 00:17:03.087223 systemd[1]: Started cri-containerd-25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704.scope - libcontainer container 25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704.
Mar 14 00:17:03.107819 containerd[1471]: time="2026-03-14T00:17:03.107449393Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:17:03.110388 containerd[1471]: time="2026-03-14T00:17:03.110249676Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 14 00:17:03.112795 containerd[1471]: time="2026-03-14T00:17:03.112628313Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:17:03.119440 containerd[1471]: time="2026-03-14T00:17:03.119218787Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.717112832s"
Mar 14 00:17:03.119662 containerd[1471]: time="2026-03-14T00:17:03.119465094Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 14 00:17:03.138976 containerd[1471]: time="2026-03-14T00:17:03.138914188Z" level=info msg="CreateContainer within sandbox \"b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 14 00:17:03.162151 systemd[1]: cri-containerd-25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704.scope: Deactivated successfully.
Mar 14 00:17:03.173287 containerd[1471]: time="2026-03-14T00:17:03.173131430Z" level=info msg="StartContainer for \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\" returns successfully"
Mar 14 00:17:03.199612 containerd[1471]: time="2026-03-14T00:17:03.199292565Z" level=info msg="CreateContainer within sandbox \"b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\""
Mar 14 00:17:03.200325 containerd[1471]: time="2026-03-14T00:17:03.200197608Z" level=info msg="StartContainer for \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\""
Mar 14 00:17:03.379195 containerd[1471]: time="2026-03-14T00:17:03.378944945Z" level=info msg="shim disconnected" id=25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704 namespace=k8s.io
Mar 14 00:17:03.379195 containerd[1471]: time="2026-03-14T00:17:03.379076624Z" level=warning msg="cleaning up after shim disconnected" id=25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704 namespace=k8s.io
Mar 14 00:17:03.379195 containerd[1471]: time="2026-03-14T00:17:03.379096701Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:17:03.381460 systemd[1]: Started cri-containerd-64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b.scope - libcontainer container 64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b.
Mar 14 00:17:03.467972 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704-rootfs.mount: Deactivated successfully.
Mar 14 00:17:03.514597 containerd[1471]: time="2026-03-14T00:17:03.513581322Z" level=info msg="StartContainer for \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\" returns successfully"
Mar 14 00:17:03.869411 kubelet[2647]: E0314 00:17:03.869194 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:03.883895 kubelet[2647]: E0314 00:17:03.883797 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:03.888777 containerd[1471]: time="2026-03-14T00:17:03.886898143Z" level=info msg="CreateContainer within sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:17:03.960648 containerd[1471]: time="2026-03-14T00:17:03.960458952Z" level=info msg="CreateContainer within sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\""
Mar 14 00:17:03.964857 containerd[1471]: time="2026-03-14T00:17:03.964678200Z" level=info msg="StartContainer for \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\""
Mar 14 00:17:04.094005 systemd[1]: Started cri-containerd-a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6.scope - libcontainer container a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6.
Mar 14 00:17:04.224954 kubelet[2647]: I0314 00:17:04.217580 2647 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-operator-78cf5644cb-rtkdl" podStartSLOduration=4.663685251 podStartE2EDuration="29.217400672s" podCreationTimestamp="2026-03-14 00:16:35 +0000 UTC" firstStartedPulling="2026-03-14 00:16:38.568252381 +0000 UTC m=+11.871295453" lastFinishedPulling="2026-03-14 00:17:03.121967803 +0000 UTC m=+36.425010874" observedRunningTime="2026-03-14 00:17:04.217241601 +0000 UTC m=+37.520284672" watchObservedRunningTime="2026-03-14 00:17:04.217400672 +0000 UTC m=+37.520443894"
Mar 14 00:17:04.353621 containerd[1471]: time="2026-03-14T00:17:04.348648564Z" level=info msg="StartContainer for \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\" returns successfully"
Mar 14 00:17:04.791354 kubelet[2647]: I0314 00:17:04.790882 2647 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Mar 14 00:17:04.903075 systemd[1]: Created slice kubepods-burstable-pod4f7b3ae6_4d90_41ae_94fd_f2203e9588b4.slice - libcontainer container kubepods-burstable-pod4f7b3ae6_4d90_41ae_94fd_f2203e9588b4.slice.
Mar 14 00:17:04.928021 kubelet[2647]: I0314 00:17:04.927918 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w45hr\" (UniqueName: \"kubernetes.io/projected/e8e9780a-5af8-43b4-b951-c9263bf78ceb-kube-api-access-w45hr\") pod \"coredns-7d764666f9-r8vvf\" (UID: \"e8e9780a-5af8-43b4-b951-c9263bf78ceb\") " pod="kube-system/coredns-7d764666f9-r8vvf"
Mar 14 00:17:04.928021 kubelet[2647]: I0314 00:17:04.928013 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e8e9780a-5af8-43b4-b951-c9263bf78ceb-config-volume\") pod \"coredns-7d764666f9-r8vvf\" (UID: \"e8e9780a-5af8-43b4-b951-c9263bf78ceb\") " pod="kube-system/coredns-7d764666f9-r8vvf"
Mar 14 00:17:04.928806 kubelet[2647]: I0314 00:17:04.928039 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f7b3ae6-4d90-41ae-94fd-f2203e9588b4-config-volume\") pod \"coredns-7d764666f9-ms9pj\" (UID: \"4f7b3ae6-4d90-41ae-94fd-f2203e9588b4\") " pod="kube-system/coredns-7d764666f9-ms9pj"
Mar 14 00:17:04.928806 kubelet[2647]: I0314 00:17:04.928061 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4kmd\" (UniqueName: \"kubernetes.io/projected/4f7b3ae6-4d90-41ae-94fd-f2203e9588b4-kube-api-access-j4kmd\") pod \"coredns-7d764666f9-ms9pj\" (UID: \"4f7b3ae6-4d90-41ae-94fd-f2203e9588b4\") " pod="kube-system/coredns-7d764666f9-ms9pj"
Mar 14 00:17:04.932366 systemd[1]: Created slice kubepods-burstable-pode8e9780a_5af8_43b4_b951_c9263bf78ceb.slice - libcontainer container kubepods-burstable-pode8e9780a_5af8_43b4_b951_c9263bf78ceb.slice.
Mar 14 00:17:04.933864 kubelet[2647]: E0314 00:17:04.933365 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:04.934950 kubelet[2647]: E0314 00:17:04.934860 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:04.995964 kubelet[2647]: I0314 00:17:04.995336 2647 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-bmf9k" podStartSLOduration=4.015565644 podStartE2EDuration="29.995319264s" podCreationTimestamp="2026-03-14 00:16:35 +0000 UTC" firstStartedPulling="2026-03-14 00:16:37.89171185 +0000 UTC m=+11.194754931" lastFinishedPulling="2026-03-14 00:17:03.87146547 +0000 UTC m=+37.174508551" observedRunningTime="2026-03-14 00:17:04.981727349 +0000 UTC m=+38.284770439" watchObservedRunningTime="2026-03-14 00:17:04.995319264 +0000 UTC m=+38.298362335"
Mar 14 00:17:05.597441 kubelet[2647]: E0314 00:17:05.594692 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:05.773892 containerd[1471]: time="2026-03-14T00:17:05.616037430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ms9pj,Uid:4f7b3ae6-4d90-41ae-94fd-f2203e9588b4,Namespace:kube-system,Attempt:0,}"
Mar 14 00:17:05.979436 kubelet[2647]: E0314 00:17:05.979130 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:06.155578 kubelet[2647]: E0314 00:17:06.154481 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:06.169701 containerd[1471]: time="2026-03-14T00:17:06.164954915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-r8vvf,Uid:e8e9780a-5af8-43b4-b951-c9263bf78ceb,Namespace:kube-system,Attempt:0,}"
Mar 14 00:17:06.977132 kubelet[2647]: E0314 00:17:06.976288 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:08.846381 systemd-networkd[1393]: cilium_host: Link UP
Mar 14 00:17:08.851022 systemd-networkd[1393]: cilium_net: Link UP
Mar 14 00:17:08.851444 systemd-networkd[1393]: cilium_net: Gained carrier
Mar 14 00:17:08.851968 systemd-networkd[1393]: cilium_host: Gained carrier
Mar 14 00:17:08.853202 systemd-networkd[1393]: cilium_net: Gained IPv6LL
Mar 14 00:17:09.550964 systemd-networkd[1393]: cilium_vxlan: Link UP
Mar 14 00:17:09.550978 systemd-networkd[1393]: cilium_vxlan: Gained carrier
Mar 14 00:17:09.898969 systemd-networkd[1393]: cilium_host: Gained IPv6LL
Mar 14 00:17:10.210000 kernel: NET: Registered PF_ALG protocol family
Mar 14 00:17:11.089893 systemd-networkd[1393]: cilium_vxlan: Gained IPv6LL
Mar 14 00:17:14.086054 kubelet[2647]: E0314 00:17:14.086014 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:14.294412 systemd-networkd[1393]: lxc_health: Link UP
Mar 14 00:17:14.320134 systemd-networkd[1393]: lxc_health: Gained carrier
Mar 14 00:17:14.677774 kernel: eth0: renamed from tmp45ae3
Mar 14 00:17:14.683073 systemd-networkd[1393]: lxc8905f7f2c86b: Link UP
Mar 14 00:17:14.711619 kernel: eth0: renamed from tmp349c9
Mar 14 00:17:14.707647 systemd-networkd[1393]: lxccbf82d8e5408: Link UP
Mar 14 00:17:14.734669 systemd-networkd[1393]: lxc8905f7f2c86b: Gained carrier
Mar 14 00:17:14.735159 systemd-networkd[1393]: lxccbf82d8e5408: Gained carrier
Mar 14 00:17:15.383404 systemd-networkd[1393]: lxc_health: Gained IPv6LL
Mar 14 00:17:15.961647 systemd-networkd[1393]: lxccbf82d8e5408: Gained IPv6LL
Mar 14 00:17:16.050823 kubelet[2647]: E0314 00:17:16.047448 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:16.588284 systemd-networkd[1393]: lxc8905f7f2c86b: Gained IPv6LL
Mar 14 00:17:16.658980 kubelet[2647]: E0314 00:17:16.655103 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:23.385487 sudo[1654]: pam_unix(sudo:session): session closed for user root
Mar 14 00:17:23.401909 sshd[1651]: pam_unix(sshd:session): session closed for user core
Mar 14 00:17:23.423151 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:43942.service: Deactivated successfully.
Mar 14 00:17:23.436045 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:17:23.436344 systemd[1]: session-9.scope: Consumed 22.499s CPU time, 164.5M memory peak, 0B memory swap peak.
Mar 14 00:17:23.441705 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:17:23.444948 systemd-logind[1451]: Removed session 9.
Mar 14 00:17:24.467186 containerd[1471]: time="2026-03-14T00:17:24.466584115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:17:24.467186 containerd[1471]: time="2026-03-14T00:17:24.466732698Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:17:24.467186 containerd[1471]: time="2026-03-14T00:17:24.466753451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:24.468231 containerd[1471]: time="2026-03-14T00:17:24.467677021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:17:24.468231 containerd[1471]: time="2026-03-14T00:17:24.467771835Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:17:24.468231 containerd[1471]: time="2026-03-14T00:17:24.467791426Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:24.468231 containerd[1471]: time="2026-03-14T00:17:24.467901420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:24.469874 containerd[1471]: time="2026-03-14T00:17:24.469476395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:17:24.530033 systemd[1]: Started cri-containerd-349c9d6d0dc6af489a85115995ec8a6c10d3d418343d25b2ca443795d413d0ae.scope - libcontainer container 349c9d6d0dc6af489a85115995ec8a6c10d3d418343d25b2ca443795d413d0ae.
Mar 14 00:17:24.535208 systemd[1]: Started cri-containerd-45ae37a02d092e48c0bbc7a8e63a520c20aef425170a46fcf09a75a9cb5b0afd.scope - libcontainer container 45ae37a02d092e48c0bbc7a8e63a520c20aef425170a46fcf09a75a9cb5b0afd.
Mar 14 00:17:24.567294 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 14 00:17:24.575645 systemd-resolved[1338]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 14 00:17:24.626997 containerd[1471]: time="2026-03-14T00:17:24.626461447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-ms9pj,Uid:4f7b3ae6-4d90-41ae-94fd-f2203e9588b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"349c9d6d0dc6af489a85115995ec8a6c10d3d418343d25b2ca443795d413d0ae\""
Mar 14 00:17:24.632147 kubelet[2647]: E0314 00:17:24.632030 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:24.641674 containerd[1471]: time="2026-03-14T00:17:24.640251990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-r8vvf,Uid:e8e9780a-5af8-43b4-b951-c9263bf78ceb,Namespace:kube-system,Attempt:0,} returns sandbox id \"45ae37a02d092e48c0bbc7a8e63a520c20aef425170a46fcf09a75a9cb5b0afd\""
Mar 14 00:17:24.643205 kubelet[2647]: E0314 00:17:24.643127 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:24.658795 containerd[1471]: time="2026-03-14T00:17:24.658658512Z" level=info msg="CreateContainer within sandbox \"45ae37a02d092e48c0bbc7a8e63a520c20aef425170a46fcf09a75a9cb5b0afd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:17:24.677632 containerd[1471]: time="2026-03-14T00:17:24.677402710Z" level=info msg="CreateContainer within sandbox \"349c9d6d0dc6af489a85115995ec8a6c10d3d418343d25b2ca443795d413d0ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:17:24.693776 containerd[1471]: time="2026-03-14T00:17:24.693663857Z" level=info msg="CreateContainer within sandbox \"45ae37a02d092e48c0bbc7a8e63a520c20aef425170a46fcf09a75a9cb5b0afd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da6ecdc8dda014799f93fda2f7db8865950cdc87e8a2bdf180a034ee71e0c476\""
Mar 14 00:17:24.696888 containerd[1471]: time="2026-03-14T00:17:24.695857882Z" level=info msg="StartContainer for \"da6ecdc8dda014799f93fda2f7db8865950cdc87e8a2bdf180a034ee71e0c476\""
Mar 14 00:17:24.740590 containerd[1471]: time="2026-03-14T00:17:24.739810363Z" level=info msg="CreateContainer within sandbox \"349c9d6d0dc6af489a85115995ec8a6c10d3d418343d25b2ca443795d413d0ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e7ccd81b7093f565809a0b3b8dc26641a262ee26151179a6151eabe2f3dc5375\""
Mar 14 00:17:24.742327 containerd[1471]: time="2026-03-14T00:17:24.742250930Z" level=info msg="StartContainer for \"e7ccd81b7093f565809a0b3b8dc26641a262ee26151179a6151eabe2f3dc5375\""
Mar 14 00:17:24.761434 systemd[1]: Started cri-containerd-da6ecdc8dda014799f93fda2f7db8865950cdc87e8a2bdf180a034ee71e0c476.scope - libcontainer container da6ecdc8dda014799f93fda2f7db8865950cdc87e8a2bdf180a034ee71e0c476.
Mar 14 00:17:24.819721 systemd[1]: Started cri-containerd-e7ccd81b7093f565809a0b3b8dc26641a262ee26151179a6151eabe2f3dc5375.scope - libcontainer container e7ccd81b7093f565809a0b3b8dc26641a262ee26151179a6151eabe2f3dc5375.
Mar 14 00:17:24.822819 containerd[1471]: time="2026-03-14T00:17:24.822679278Z" level=info msg="StartContainer for \"da6ecdc8dda014799f93fda2f7db8865950cdc87e8a2bdf180a034ee71e0c476\" returns successfully"
Mar 14 00:17:24.883010 containerd[1471]: time="2026-03-14T00:17:24.882648396Z" level=info msg="StartContainer for \"e7ccd81b7093f565809a0b3b8dc26641a262ee26151179a6151eabe2f3dc5375\" returns successfully"
Mar 14 00:17:25.388187 kubelet[2647]: E0314 00:17:25.387852 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:25.392312 kubelet[2647]: E0314 00:17:25.392089 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:25.442086 kubelet[2647]: I0314 00:17:25.441680 2647 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-r8vvf" podStartSLOduration=54.441665992 podStartE2EDuration="54.441665992s" podCreationTimestamp="2026-03-14 00:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:17:25.414992889 +0000 UTC m=+58.718035980" watchObservedRunningTime="2026-03-14 00:17:25.441665992 +0000 UTC m=+58.744709083"
Mar 14 00:17:25.442086 kubelet[2647]: I0314 00:17:25.441871 2647 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ms9pj" podStartSLOduration=54.441868174 podStartE2EDuration="54.441868174s" podCreationTimestamp="2026-03-14 00:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:17:25.439462284 +0000 UTC m=+58.742505355" watchObservedRunningTime="2026-03-14 00:17:25.441868174 +0000 UTC m=+58.744911246"
Mar 14 00:17:26.396685 kubelet[2647]: E0314 00:17:26.395563 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:26.396685 kubelet[2647]: E0314 00:17:26.396244 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:31.577402 kubelet[2647]: E0314 00:17:31.575257 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:31.577402 kubelet[2647]: E0314 00:17:31.576325 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:31.597426 kubelet[2647]: E0314 00:17:31.597191 2647 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.088s"
Mar 14 00:17:31.615808 kubelet[2647]: E0314 00:17:31.602380 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:37.024210 kubelet[2647]: E0314 00:17:37.024034 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:43.031012 kubelet[2647]: E0314 00:17:43.024790 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:17:47.025312 kubelet[2647]: E0314 00:17:47.024796 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:18:13.575879 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:41850.service - OpenSSH per-connection server daemon (10.0.0.1:41850).
Mar 14 00:18:13.923482 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 41850 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:18:14.005445 sshd[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:14.149710 systemd-logind[1451]: New session 10 of user core.
Mar 14 00:18:14.156010 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:18:19.002265 sshd[4221]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:19.478936 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:41850.service: Deactivated successfully.
Mar 14 00:18:19.487872 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:18:19.489035 systemd[1]: session-10.scope: Consumed 2.155s CPU time.
Mar 14 00:18:19.493737 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:18:19.499373 systemd-logind[1451]: Removed session 10.
Mar 14 00:18:24.036161 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:52920.service - OpenSSH per-connection server daemon (10.0.0.1:52920).
Mar 14 00:18:24.136887 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 52920 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:18:24.142766 sshd[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:24.160224 systemd-logind[1451]: New session 11 of user core.
Mar 14 00:18:24.168221 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:18:24.498731 sshd[4237]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:24.508239 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:52920.service: Deactivated successfully.
Mar 14 00:18:24.513021 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:18:24.516284 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:18:24.519739 systemd-logind[1451]: Removed session 11.
Mar 14 00:18:29.571291 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:52930.service - OpenSSH per-connection server daemon (10.0.0.1:52930).
Mar 14 00:18:29.684309 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 52930 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:18:29.687944 sshd[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:29.740403 systemd-logind[1451]: New session 12 of user core.
Mar 14 00:18:29.764388 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:18:30.164925 sshd[4254]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:30.176979 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:52930.service: Deactivated successfully.
Mar 14 00:18:30.181105 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:18:30.184834 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:18:30.190392 systemd-logind[1451]: Removed session 12.
Mar 14 00:18:31.025600 kubelet[2647]: E0314 00:18:31.022714 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:18:33.025551 kubelet[2647]: E0314 00:18:33.024034 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:18:34.026843 kubelet[2647]: E0314 00:18:34.026798 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:18:35.195994 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:52030.service - OpenSSH per-connection server daemon (10.0.0.1:52030).
Mar 14 00:18:35.274308 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 52030 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:18:35.278244 sshd[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:35.307293 systemd-logind[1451]: New session 13 of user core.
Mar 14 00:18:35.315944 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:18:35.654218 sshd[4269]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:35.662251 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:52030.service: Deactivated successfully.
Mar 14 00:18:35.669149 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:18:35.683206 systemd-logind[1451]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:18:35.689727 systemd-logind[1451]: Removed session 13.
Mar 14 00:18:40.683141 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:42876.service - OpenSSH per-connection server daemon (10.0.0.1:42876).
Mar 14 00:18:40.799192 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 42876 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:18:40.810171 sshd[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:40.828049 systemd-logind[1451]: New session 14 of user core.
Mar 14 00:18:40.851409 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:18:41.208427 sshd[4287]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:41.221954 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:42876.service: Deactivated successfully.
Mar 14 00:18:41.236295 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:18:41.240133 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:18:41.250047 systemd-logind[1451]: Removed session 14.
Mar 14 00:18:46.254311 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:42888.service - OpenSSH per-connection server daemon (10.0.0.1:42888).
Mar 14 00:18:46.358264 sshd[4302]: Accepted publickey for core from 10.0.0.1 port 42888 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:18:46.361414 sshd[4302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:46.410669 systemd-logind[1451]: New session 15 of user core.
Mar 14 00:18:46.438323 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:18:46.989283 sshd[4302]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:47.015316 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:42888.service: Deactivated successfully.
Mar 14 00:18:47.021907 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:18:47.051276 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:18:47.057640 systemd-logind[1451]: Removed session 15.
Mar 14 00:18:51.029677 kubelet[2647]: E0314 00:18:51.025852 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:18:52.024787 kubelet[2647]: E0314 00:18:52.022737 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:18:52.070696 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:47574.service - OpenSSH per-connection server daemon (10.0.0.1:47574).
Mar 14 00:18:52.147055 sshd[4318]: Accepted publickey for core from 10.0.0.1 port 47574 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:18:52.150350 sshd[4318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:52.168458 systemd-logind[1451]: New session 16 of user core.
Mar 14 00:18:52.185230 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:18:52.593458 sshd[4318]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:52.602356 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:47574.service: Deactivated successfully.
Mar 14 00:18:52.609365 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:18:52.619453 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:18:52.641481 systemd-logind[1451]: Removed session 16.
Mar 14 00:18:54.022453 kubelet[2647]: E0314 00:18:54.021428 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:18:57.671611 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:47580.service - OpenSSH per-connection server daemon (10.0.0.1:47580).
Mar 14 00:18:57.790726 sshd[4334]: Accepted publickey for core from 10.0.0.1 port 47580 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:18:57.793782 sshd[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:18:57.823640 systemd-logind[1451]: New session 17 of user core.
Mar 14 00:18:57.844915 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:18:58.266894 sshd[4334]: pam_unix(sshd:session): session closed for user core
Mar 14 00:18:58.283731 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:47580.service: Deactivated successfully.
Mar 14 00:18:58.291885 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:18:58.301099 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:18:58.313359 systemd-logind[1451]: Removed session 17.
Mar 14 00:19:03.297843 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:45798.service - OpenSSH per-connection server daemon (10.0.0.1:45798).
Mar 14 00:19:03.474918 sshd[4350]: Accepted publickey for core from 10.0.0.1 port 45798 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:03.485823 sshd[4350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:03.518379 systemd-logind[1451]: New session 18 of user core.
Mar 14 00:19:03.549948 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:19:04.077803 sshd[4350]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:04.096991 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:45798.service: Deactivated successfully.
Mar 14 00:19:04.107826 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:19:04.111153 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:19:04.114937 systemd-logind[1451]: Removed session 18.
Mar 14 00:19:07.027954 kubelet[2647]: E0314 00:19:07.024755 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:19:09.115785 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:45808.service - OpenSSH per-connection server daemon (10.0.0.1:45808).
Mar 14 00:19:09.229987 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 45808 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:09.238904 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:09.257271 systemd-logind[1451]: New session 19 of user core.
Mar 14 00:19:09.267908 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:19:10.022013 sshd[4367]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:10.081682 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:45808.service: Deactivated successfully.
Mar 14 00:19:10.094078 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:19:10.095656 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:19:10.118484 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:39776.service - OpenSSH per-connection server daemon (10.0.0.1:39776).
Mar 14 00:19:10.137678 systemd-logind[1451]: Removed session 19.
Mar 14 00:19:10.301648 sshd[4385]: Accepted publickey for core from 10.0.0.1 port 39776 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:10.304645 sshd[4385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:10.328059 systemd-logind[1451]: New session 20 of user core.
Mar 14 00:19:10.354268 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:19:10.871182 sshd[4385]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:10.897906 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:39776.service: Deactivated successfully.
Mar 14 00:19:10.901268 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:19:10.903068 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:19:10.928405 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:39788.service - OpenSSH per-connection server daemon (10.0.0.1:39788).
Mar 14 00:19:10.939706 systemd-logind[1451]: Removed session 20.
Mar 14 00:19:11.106162 sshd[4399]: Accepted publickey for core from 10.0.0.1 port 39788 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:11.116197 sshd[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:11.158981 systemd-logind[1451]: New session 21 of user core.
Mar 14 00:19:11.173962 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:19:11.676093 sshd[4399]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:11.691362 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:39788.service: Deactivated successfully.
Mar 14 00:19:11.697031 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:19:11.701357 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:19:11.706874 systemd-logind[1451]: Removed session 21.
Mar 14 00:19:13.024255 kubelet[2647]: E0314 00:19:13.023206 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:19:16.704794 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:39792.service - OpenSSH per-connection server daemon (10.0.0.1:39792).
Mar 14 00:19:16.797242 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 39792 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:16.801699 sshd[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:16.840425 systemd-logind[1451]: New session 22 of user core.
Mar 14 00:19:16.847464 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:19:17.196743 sshd[4415]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:17.208480 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:39792.service: Deactivated successfully.
Mar 14 00:19:17.214277 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:19:17.219196 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:19:17.228349 systemd-logind[1451]: Removed session 22.
Mar 14 00:19:22.229462 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:50682.service - OpenSSH per-connection server daemon (10.0.0.1:50682).
Mar 14 00:19:22.309299 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 50682 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:22.311775 sshd[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:22.351365 systemd-logind[1451]: New session 23 of user core.
Mar 14 00:19:22.369242 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:19:22.876231 sshd[4430]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:22.900258 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:50682.service: Deactivated successfully.
Mar 14 00:19:22.905254 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:19:22.918480 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:19:22.925190 systemd-logind[1451]: Removed session 23.
Mar 14 00:19:27.924402 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:50692.service - OpenSSH per-connection server daemon (10.0.0.1:50692).
Mar 14 00:19:27.999597 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 50692 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:28.007923 sshd[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:28.045652 systemd-logind[1451]: New session 24 of user core.
Mar 14 00:19:28.065264 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 00:19:28.391371 sshd[4446]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:28.416018 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:50692.service: Deactivated successfully.
Mar 14 00:19:28.417195 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit.
Mar 14 00:19:28.435325 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 00:19:28.439317 systemd-logind[1451]: Removed session 24.
Mar 14 00:19:33.432272 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:33050.service - OpenSSH per-connection server daemon (10.0.0.1:33050).
Mar 14 00:19:33.506190 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 33050 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:33.519000 sshd[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:33.546894 systemd-logind[1451]: New session 25 of user core.
Mar 14 00:19:33.559043 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 00:19:33.817252 sshd[4461]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:33.827641 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:33050.service: Deactivated successfully.
Mar 14 00:19:33.831193 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 00:19:33.844473 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit.
Mar 14 00:19:33.848442 systemd-logind[1451]: Removed session 25.
Mar 14 00:19:34.045349 kubelet[2647]: E0314 00:19:34.033118 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:19:38.024487 kubelet[2647]: E0314 00:19:38.023372 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:19:38.904432 systemd[1]: Started sshd@25-10.0.0.36:22-10.0.0.1:33066.service - OpenSSH per-connection server daemon (10.0.0.1:33066).
Mar 14 00:19:39.024886 sshd[4475]: Accepted publickey for core from 10.0.0.1 port 33066 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:39.027658 sshd[4475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:39.054727 systemd-logind[1451]: New session 26 of user core.
Mar 14 00:19:39.071974 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 14 00:19:39.431854 sshd[4475]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:39.449107 systemd[1]: sshd@25-10.0.0.36:22-10.0.0.1:33066.service: Deactivated successfully.
Mar 14 00:19:39.458290 systemd[1]: session-26.scope: Deactivated successfully.
Mar 14 00:19:39.462189 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit.
Mar 14 00:19:39.473268 systemd-logind[1451]: Removed session 26.
Mar 14 00:19:44.515200 systemd[1]: Started sshd@26-10.0.0.36:22-10.0.0.1:53976.service - OpenSSH per-connection server daemon (10.0.0.1:53976).
Mar 14 00:19:44.652781 sshd[4491]: Accepted publickey for core from 10.0.0.1 port 53976 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:44.658965 sshd[4491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:44.696247 systemd-logind[1451]: New session 27 of user core.
Mar 14 00:19:44.704852 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 14 00:19:45.111194 sshd[4491]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:45.133418 systemd[1]: sshd@26-10.0.0.36:22-10.0.0.1:53976.service: Deactivated successfully.
Mar 14 00:19:45.160816 systemd[1]: session-27.scope: Deactivated successfully.
Mar 14 00:19:45.176450 systemd-logind[1451]: Session 27 logged out. Waiting for processes to exit.
Mar 14 00:19:45.182368 systemd-logind[1451]: Removed session 27.
Mar 14 00:19:47.038845 kubelet[2647]: E0314 00:19:47.037978 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:19:50.151426 systemd[1]: Started sshd@27-10.0.0.36:22-10.0.0.1:51184.service - OpenSSH per-connection server daemon (10.0.0.1:51184).
Mar 14 00:19:50.278664 sshd[4506]: Accepted publickey for core from 10.0.0.1 port 51184 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:50.283850 sshd[4506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:50.301883 systemd-logind[1451]: New session 28 of user core.
Mar 14 00:19:50.310838 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 14 00:19:50.618857 sshd[4506]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:50.625301 systemd[1]: sshd@27-10.0.0.36:22-10.0.0.1:51184.service: Deactivated successfully.
Mar 14 00:19:50.629278 systemd[1]: session-28.scope: Deactivated successfully.
Mar 14 00:19:50.633082 systemd-logind[1451]: Session 28 logged out. Waiting for processes to exit.
Mar 14 00:19:50.640217 systemd-logind[1451]: Removed session 28.
Mar 14 00:19:55.029833 kubelet[2647]: E0314 00:19:55.026106 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:19:55.714896 systemd[1]: Started sshd@28-10.0.0.36:22-10.0.0.1:51198.service - OpenSSH per-connection server daemon (10.0.0.1:51198).
Mar 14 00:19:55.938269 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 51198 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:55.977011 sshd[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:56.029596 systemd-logind[1451]: New session 29 of user core.
Mar 14 00:19:56.060167 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 14 00:19:56.698741 sshd[4521]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:56.744874 systemd[1]: sshd@28-10.0.0.36:22-10.0.0.1:51198.service: Deactivated successfully.
Mar 14 00:19:56.762957 systemd[1]: session-29.scope: Deactivated successfully.
Mar 14 00:19:56.779672 systemd-logind[1451]: Session 29 logged out. Waiting for processes to exit.
Mar 14 00:19:56.800067 systemd[1]: Started sshd@29-10.0.0.36:22-10.0.0.1:51210.service - OpenSSH per-connection server daemon (10.0.0.1:51210).
Mar 14 00:19:56.802990 systemd-logind[1451]: Removed session 29.
Mar 14 00:19:56.979132 sshd[4535]: Accepted publickey for core from 10.0.0.1 port 51210 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:56.990306 sshd[4535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:57.028091 systemd-logind[1451]: New session 30 of user core.
Mar 14 00:19:57.083281 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 14 00:19:58.791692 sshd[4535]: pam_unix(sshd:session): session closed for user core
Mar 14 00:19:58.818022 systemd[1]: sshd@29-10.0.0.36:22-10.0.0.1:51210.service: Deactivated successfully.
Mar 14 00:19:58.823884 systemd[1]: session-30.scope: Deactivated successfully.
Mar 14 00:19:58.831862 systemd-logind[1451]: Session 30 logged out. Waiting for processes to exit.
Mar 14 00:19:58.855442 systemd[1]: Started sshd@30-10.0.0.36:22-10.0.0.1:51216.service - OpenSSH per-connection server daemon (10.0.0.1:51216).
Mar 14 00:19:58.862060 systemd-logind[1451]: Removed session 30.
Mar 14 00:19:59.083744 sshd[4548]: Accepted publickey for core from 10.0.0.1 port 51216 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:19:59.086915 sshd[4548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:19:59.120607 systemd-logind[1451]: New session 31 of user core.
Mar 14 00:19:59.170298 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 14 00:20:01.082284 sshd[4548]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:01.105291 systemd[1]: sshd@30-10.0.0.36:22-10.0.0.1:51216.service: Deactivated successfully.
Mar 14 00:20:01.114142 systemd[1]: session-31.scope: Deactivated successfully.
Mar 14 00:20:01.114600 systemd[1]: session-31.scope: Consumed 1.228s CPU time.
Mar 14 00:20:01.122819 systemd-logind[1451]: Session 31 logged out. Waiting for processes to exit.
Mar 14 00:20:01.153360 systemd[1]: Started sshd@31-10.0.0.36:22-10.0.0.1:60660.service - OpenSSH per-connection server daemon (10.0.0.1:60660).
Mar 14 00:20:01.159141 systemd-logind[1451]: Removed session 31.
Mar 14 00:20:01.332873 sshd[4570]: Accepted publickey for core from 10.0.0.1 port 60660 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:01.343120 sshd[4570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:01.407141 systemd-logind[1451]: New session 32 of user core.
Mar 14 00:20:01.436844 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 14 00:20:02.897351 sshd[4570]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:02.934842 systemd[1]: sshd@31-10.0.0.36:22-10.0.0.1:60660.service: Deactivated successfully.
Mar 14 00:20:02.955946 systemd[1]: session-32.scope: Deactivated successfully.
Mar 14 00:20:02.972700 systemd-logind[1451]: Session 32 logged out. Waiting for processes to exit.
Mar 14 00:20:03.006173 systemd[1]: Started sshd@32-10.0.0.36:22-10.0.0.1:60666.service - OpenSSH per-connection server daemon (10.0.0.1:60666).
Mar 14 00:20:03.012236 systemd-logind[1451]: Removed session 32.
Mar 14 00:20:03.162315 sshd[4584]: Accepted publickey for core from 10.0.0.1 port 60666 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:03.162904 sshd[4584]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:03.202459 systemd-logind[1451]: New session 33 of user core.
Mar 14 00:20:03.213564 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 14 00:20:03.564636 sshd[4584]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:03.577320 systemd[1]: sshd@32-10.0.0.36:22-10.0.0.1:60666.service: Deactivated successfully.
Mar 14 00:20:03.603875 systemd[1]: session-33.scope: Deactivated successfully.
Mar 14 00:20:03.621751 systemd-logind[1451]: Session 33 logged out. Waiting for processes to exit.
Mar 14 00:20:03.624126 systemd-logind[1451]: Removed session 33.
Mar 14 00:20:08.623623 systemd[1]: Started sshd@33-10.0.0.36:22-10.0.0.1:60668.service - OpenSSH per-connection server daemon (10.0.0.1:60668).
Mar 14 00:20:08.787784 sshd[4598]: Accepted publickey for core from 10.0.0.1 port 60668 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:08.794909 sshd[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:08.834176 systemd-logind[1451]: New session 34 of user core.
Mar 14 00:20:08.864157 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 14 00:20:09.044155 kubelet[2647]: E0314 00:20:09.037289 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:20:09.339883 sshd[4598]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:09.358335 systemd[1]: sshd@33-10.0.0.36:22-10.0.0.1:60668.service: Deactivated successfully.
Mar 14 00:20:09.375184 systemd[1]: session-34.scope: Deactivated successfully.
Mar 14 00:20:09.383260 systemd-logind[1451]: Session 34 logged out. Waiting for processes to exit.
Mar 14 00:20:09.390054 systemd-logind[1451]: Removed session 34.
Mar 14 00:20:11.068037 kubelet[2647]: E0314 00:20:11.067933 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:20:14.389395 systemd[1]: Started sshd@34-10.0.0.36:22-10.0.0.1:45654.service - OpenSSH per-connection server daemon (10.0.0.1:45654).
Mar 14 00:20:14.607632 sshd[4614]: Accepted publickey for core from 10.0.0.1 port 45654 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:14.611104 sshd[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:14.659132 systemd-logind[1451]: New session 35 of user core.
Mar 14 00:20:14.676040 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 14 00:20:15.178955 sshd[4614]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:15.203243 systemd[1]: sshd@34-10.0.0.36:22-10.0.0.1:45654.service: Deactivated successfully.
Mar 14 00:20:15.213759 systemd[1]: session-35.scope: Deactivated successfully.
Mar 14 00:20:15.223840 systemd-logind[1451]: Session 35 logged out. Waiting for processes to exit.
Mar 14 00:20:15.231255 systemd-logind[1451]: Removed session 35.
Mar 14 00:20:15.384778 update_engine[1454]: I20260314 00:20:15.381314 1454 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 14 00:20:15.384778 update_engine[1454]: I20260314 00:20:15.381402 1454 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 14 00:20:15.384778 update_engine[1454]: I20260314 00:20:15.382256 1454 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 14 00:20:15.386170 update_engine[1454]: I20260314 00:20:15.386137 1454 omaha_request_params.cc:62] Current group set to lts
Mar 14 00:20:15.403652 update_engine[1454]: I20260314 00:20:15.403481 1454 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 14 00:20:15.403844 update_engine[1454]: I20260314 00:20:15.403811 1454 update_attempter.cc:643] Scheduling an action processor start.
Mar 14 00:20:15.413723 update_engine[1454]: I20260314 00:20:15.405324 1454 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 14 00:20:15.413723 update_engine[1454]: I20260314 00:20:15.405704 1454 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 14 00:20:15.413723 update_engine[1454]: I20260314 00:20:15.405931 1454 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 14 00:20:15.413723 update_engine[1454]: I20260314 00:20:15.408594 1454 omaha_request_action.cc:272] Request:
Mar 14 00:20:15.413723 update_engine[1454]:
Mar 14 00:20:15.413723 update_engine[1454]:
Mar 14 00:20:15.413723 update_engine[1454]:
Mar 14 00:20:15.413723 update_engine[1454]:
Mar 14 00:20:15.413723 update_engine[1454]:
Mar 14 00:20:15.413723 update_engine[1454]:
Mar 14 00:20:15.413723 update_engine[1454]:
Mar 14 00:20:15.413723 update_engine[1454]:
Mar 14 00:20:15.413723 update_engine[1454]: I20260314 00:20:15.408672 1454 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:20:15.424883 update_engine[1454]: I20260314 00:20:15.424816 1454 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:20:15.431732 update_engine[1454]: I20260314 00:20:15.430932 1454 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:20:15.467664 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 14 00:20:15.469657 update_engine[1454]: E20260314 00:20:15.467284 1454 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:20:15.469657 update_engine[1454]: I20260314 00:20:15.467654 1454 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 14 00:20:20.222769 systemd[1]: Started sshd@35-10.0.0.36:22-10.0.0.1:58390.service - OpenSSH per-connection server daemon (10.0.0.1:58390).
Mar 14 00:20:20.292119 sshd[4628]: Accepted publickey for core from 10.0.0.1 port 58390 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:20.305366 sshd[4628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:20.344209 systemd-logind[1451]: New session 36 of user core.
Mar 14 00:20:20.357916 systemd[1]: Started session-36.scope - Session 36 of User core.
Mar 14 00:20:20.829107 sshd[4628]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:20.858713 systemd[1]: sshd@35-10.0.0.36:22-10.0.0.1:58390.service: Deactivated successfully.
Mar 14 00:20:20.866387 systemd[1]: session-36.scope: Deactivated successfully.
Mar 14 00:20:20.872059 systemd-logind[1451]: Session 36 logged out. Waiting for processes to exit.
Mar 14 00:20:20.895300 systemd-logind[1451]: Removed session 36.
Mar 14 00:20:24.027115 kubelet[2647]: E0314 00:20:24.021213 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:20:25.382915 update_engine[1454]: I20260314 00:20:25.382622 1454 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:20:25.384228 update_engine[1454]: I20260314 00:20:25.383104 1454 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:20:25.384228 update_engine[1454]: I20260314 00:20:25.383725 1454 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:20:25.407054 update_engine[1454]: E20260314 00:20:25.406839 1454 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:20:25.407054 update_engine[1454]: I20260314 00:20:25.406968 1454 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 14 00:20:25.872951 systemd[1]: Started sshd@36-10.0.0.36:22-10.0.0.1:58406.service - OpenSSH per-connection server daemon (10.0.0.1:58406).
Mar 14 00:20:25.938749 sshd[4642]: Accepted publickey for core from 10.0.0.1 port 58406 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:25.947783 sshd[4642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:25.968021 systemd-logind[1451]: New session 37 of user core.
Mar 14 00:20:25.992894 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 14 00:20:26.455335 sshd[4642]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:26.488215 systemd-logind[1451]: Session 37 logged out. Waiting for processes to exit.
Mar 14 00:20:26.494274 systemd[1]: sshd@36-10.0.0.36:22-10.0.0.1:58406.service: Deactivated successfully.
Mar 14 00:20:26.499060 systemd[1]: session-37.scope: Deactivated successfully.
Mar 14 00:20:26.510827 systemd-logind[1451]: Removed session 37.
Mar 14 00:20:31.474485 systemd[1]: Started sshd@37-10.0.0.36:22-10.0.0.1:37278.service - OpenSSH per-connection server daemon (10.0.0.1:37278).
Mar 14 00:20:31.571559 sshd[4659]: Accepted publickey for core from 10.0.0.1 port 37278 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:31.573301 sshd[4659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:31.602478 systemd-logind[1451]: New session 38 of user core.
Mar 14 00:20:31.614143 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 14 00:20:31.900095 sshd[4659]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:31.913723 systemd[1]: sshd@37-10.0.0.36:22-10.0.0.1:37278.service: Deactivated successfully.
Mar 14 00:20:31.926315 systemd[1]: session-38.scope: Deactivated successfully.
Mar 14 00:20:31.932357 systemd-logind[1451]: Session 38 logged out. Waiting for processes to exit.
Mar 14 00:20:31.960204 systemd-logind[1451]: Removed session 38.
Mar 14 00:20:35.025820 kubelet[2647]: E0314 00:20:35.024654 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:20:35.381089 update_engine[1454]: I20260314 00:20:35.380024 1454 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:20:35.381807 update_engine[1454]: I20260314 00:20:35.381104 1454 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:20:35.381807 update_engine[1454]: I20260314 00:20:35.381685 1454 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:20:35.405899 update_engine[1454]: E20260314 00:20:35.405076 1454 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:20:35.405899 update_engine[1454]: I20260314 00:20:35.405226 1454 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 14 00:20:36.959383 systemd[1]: Started sshd@38-10.0.0.36:22-10.0.0.1:37294.service - OpenSSH per-connection server daemon (10.0.0.1:37294).
Mar 14 00:20:37.106866 sshd[4675]: Accepted publickey for core from 10.0.0.1 port 37294 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:37.112861 sshd[4675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:37.129140 systemd-logind[1451]: New session 39 of user core.
Mar 14 00:20:37.164603 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 14 00:20:37.546868 sshd[4675]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:37.557347 systemd[1]: sshd@38-10.0.0.36:22-10.0.0.1:37294.service: Deactivated successfully.
Mar 14 00:20:37.560808 systemd[1]: session-39.scope: Deactivated successfully.
Mar 14 00:20:37.565134 systemd-logind[1451]: Session 39 logged out. Waiting for processes to exit.
Mar 14 00:20:37.569033 systemd-logind[1451]: Removed session 39.
Mar 14 00:20:42.034229 kubelet[2647]: E0314 00:20:42.032190 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:20:42.602464 systemd[1]: Started sshd@39-10.0.0.36:22-10.0.0.1:48980.service - OpenSSH per-connection server daemon (10.0.0.1:48980).
Mar 14 00:20:42.752787 sshd[4692]: Accepted publickey for core from 10.0.0.1 port 48980 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:42.759677 sshd[4692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:42.785051 systemd-logind[1451]: New session 40 of user core.
Mar 14 00:20:42.793900 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 14 00:20:43.298736 sshd[4692]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:43.309249 systemd[1]: sshd@39-10.0.0.36:22-10.0.0.1:48980.service: Deactivated successfully.
Mar 14 00:20:43.314445 systemd[1]: session-40.scope: Deactivated successfully.
Mar 14 00:20:43.318977 systemd-logind[1451]: Session 40 logged out. Waiting for processes to exit.
Mar 14 00:20:43.327124 systemd-logind[1451]: Removed session 40.
Mar 14 00:20:45.381167 update_engine[1454]: I20260314 00:20:45.380437 1454 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:20:45.383837 update_engine[1454]: I20260314 00:20:45.381757 1454 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:20:45.383837 update_engine[1454]: I20260314 00:20:45.382236 1454 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:20:45.402199 update_engine[1454]: E20260314 00:20:45.401995 1454 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:20:45.402199 update_engine[1454]: I20260314 00:20:45.402168 1454 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 14 00:20:45.402199 update_engine[1454]: I20260314 00:20:45.402189 1454 omaha_request_action.cc:617] Omaha request response:
Mar 14 00:20:45.402705 update_engine[1454]: E20260314 00:20:45.402316 1454 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 14 00:20:45.402705 update_engine[1454]: I20260314 00:20:45.402349 1454 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 14 00:20:45.402705 update_engine[1454]: I20260314 00:20:45.402444 1454 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 14 00:20:45.402705 update_engine[1454]: I20260314 00:20:45.402458 1454 update_attempter.cc:306] Processing Done.
Mar 14 00:20:45.402705 update_engine[1454]: E20260314 00:20:45.402480 1454 update_attempter.cc:619] Update failed.
Mar 14 00:20:45.402705 update_engine[1454]: I20260314 00:20:45.402617 1454 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 14 00:20:45.402705 update_engine[1454]: I20260314 00:20:45.402632 1454 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 14 00:20:45.403163 update_engine[1454]: I20260314 00:20:45.402709 1454 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 14 00:20:45.403163 update_engine[1454]: I20260314 00:20:45.402863 1454 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 14 00:20:45.403163 update_engine[1454]: I20260314 00:20:45.402900 1454 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 14 00:20:45.403163 update_engine[1454]: I20260314 00:20:45.402912 1454 omaha_request_action.cc:272] Request:
Mar 14 00:20:45.403163 update_engine[1454]:
Mar 14 00:20:45.403163 update_engine[1454]:
Mar 14 00:20:45.403163 update_engine[1454]:
Mar 14 00:20:45.403163 update_engine[1454]:
Mar 14 00:20:45.403163 update_engine[1454]:
Mar 14 00:20:45.403163 update_engine[1454]:
Mar 14 00:20:45.403163 update_engine[1454]: I20260314 00:20:45.402927 1454 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:20:45.403956 update_engine[1454]: I20260314 00:20:45.403257 1454 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:20:45.403996 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 14 00:20:45.404700 update_engine[1454]: I20260314 00:20:45.404251 1454 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:20:45.425845 update_engine[1454]: E20260314 00:20:45.425447 1454 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:20:45.426004 update_engine[1454]: I20260314 00:20:45.425859 1454 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 14 00:20:45.426004 update_engine[1454]: I20260314 00:20:45.425884 1454 omaha_request_action.cc:617] Omaha request response:
Mar 14 00:20:45.426004 update_engine[1454]: I20260314 00:20:45.425901 1454 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 14 00:20:45.426004 update_engine[1454]: I20260314 00:20:45.425916 1454 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 14 00:20:45.426004 update_engine[1454]: I20260314 00:20:45.425926 1454 update_attempter.cc:306] Processing Done.
Mar 14 00:20:45.426004 update_engine[1454]: I20260314 00:20:45.425938 1454 update_attempter.cc:310] Error event sent.
Mar 14 00:20:45.426205 update_engine[1454]: I20260314 00:20:45.426013 1454 update_check_scheduler.cc:74] Next update check in 46m14s
Mar 14 00:20:45.428057 locksmithd[1498]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 14 00:20:48.372885 systemd[1]: Started sshd@40-10.0.0.36:22-10.0.0.1:48994.service - OpenSSH per-connection server daemon (10.0.0.1:48994).
Mar 14 00:20:48.535273 sshd[4707]: Accepted publickey for core from 10.0.0.1 port 48994 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:48.540247 sshd[4707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:48.571114 systemd-logind[1451]: New session 41 of user core.
Mar 14 00:20:48.587149 systemd[1]: Started session-41.scope - Session 41 of User core.
Mar 14 00:20:49.026848 sshd[4707]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:49.053970 systemd[1]: sshd@40-10.0.0.36:22-10.0.0.1:48994.service: Deactivated successfully.
Mar 14 00:20:49.066773 systemd[1]: session-41.scope: Deactivated successfully.
Mar 14 00:20:49.076846 systemd-logind[1451]: Session 41 logged out. Waiting for processes to exit.
Mar 14 00:20:49.094985 systemd[1]: Started sshd@41-10.0.0.36:22-10.0.0.1:48996.service - OpenSSH per-connection server daemon (10.0.0.1:48996).
Mar 14 00:20:49.099089 systemd-logind[1451]: Removed session 41.
Mar 14 00:20:49.352049 sshd[4722]: Accepted publickey for core from 10.0.0.1 port 48996 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:49.355918 sshd[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:49.397199 systemd-logind[1451]: New session 42 of user core.
Mar 14 00:20:49.444886 systemd[1]: Started session-42.scope - Session 42 of User core.
Mar 14 00:20:52.022716 kubelet[2647]: E0314 00:20:52.022436 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:20:53.469880 containerd[1471]: time="2026-03-14T00:20:53.466226148Z" level=info msg="StopContainer for \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\" with timeout 30 (s)"
Mar 14 00:20:53.477923 containerd[1471]: time="2026-03-14T00:20:53.475112915Z" level=info msg="Stop container \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\" with signal terminated"
Mar 14 00:20:53.700411 systemd[1]: cri-containerd-64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b.scope: Deactivated successfully.
Mar 14 00:20:53.700906 systemd[1]: cri-containerd-64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b.scope: Consumed 3.384s CPU time.
Mar 14 00:20:53.793213 containerd[1471]: time="2026-03-14T00:20:53.790988156Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:20:53.869599 containerd[1471]: time="2026-03-14T00:20:53.868640823Z" level=info msg="StopContainer for \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\" with timeout 2 (s)"
Mar 14 00:20:53.879917 containerd[1471]: time="2026-03-14T00:20:53.879178191Z" level=info msg="Stop container \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\" with signal terminated"
Mar 14 00:20:53.986721 systemd-networkd[1393]: lxc_health: Link DOWN
Mar 14 00:20:53.986737 systemd-networkd[1393]: lxc_health: Lost carrier
Mar 14 00:20:54.077087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b-rootfs.mount: Deactivated successfully.
Mar 14 00:20:54.125234 systemd[1]: cri-containerd-a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6.scope: Deactivated successfully.
Mar 14 00:20:54.131015 systemd[1]: cri-containerd-a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6.scope: Consumed 25.803s CPU time.
Mar 14 00:20:54.200803 containerd[1471]: time="2026-03-14T00:20:54.191836982Z" level=info msg="shim disconnected" id=64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b namespace=k8s.io
Mar 14 00:20:54.200803 containerd[1471]: time="2026-03-14T00:20:54.196308417Z" level=warning msg="cleaning up after shim disconnected" id=64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b namespace=k8s.io
Mar 14 00:20:54.200803 containerd[1471]: time="2026-03-14T00:20:54.197307804Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:20:54.306432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6-rootfs.mount: Deactivated successfully.
Mar 14 00:20:54.393994 containerd[1471]: time="2026-03-14T00:20:54.392249384Z" level=info msg="shim disconnected" id=a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6 namespace=k8s.io
Mar 14 00:20:54.393994 containerd[1471]: time="2026-03-14T00:20:54.392405865Z" level=warning msg="cleaning up after shim disconnected" id=a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6 namespace=k8s.io
Mar 14 00:20:54.393994 containerd[1471]: time="2026-03-14T00:20:54.392424759Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:20:54.537684 containerd[1471]: time="2026-03-14T00:20:54.533750399Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:20:54Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:20:54.587668 containerd[1471]: time="2026-03-14T00:20:54.577610628Z" level=info msg="StopContainer for \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\" returns successfully"
Mar 14 00:20:54.593620 containerd[1471]: time="2026-03-14T00:20:54.588690653Z" level=info msg="StopPodSandbox for \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\""
Mar 14 00:20:54.596083 containerd[1471]: time="2026-03-14T00:20:54.596036320Z" level=info msg="Container to stop \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:20:54.596237 containerd[1471]: time="2026-03-14T00:20:54.596212568Z" level=info msg="Container to stop \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:20:54.596587 containerd[1471]: time="2026-03-14T00:20:54.596398283Z" level=info msg="Container to stop \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:20:54.596764 containerd[1471]: time="2026-03-14T00:20:54.596735921Z" level=info msg="Container to stop \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:20:54.596859 containerd[1471]: time="2026-03-14T00:20:54.596835425Z" level=info msg="Container to stop \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:20:54.615389 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47-shm.mount: Deactivated successfully.
Mar 14 00:20:54.647030 containerd[1471]: time="2026-03-14T00:20:54.645705881Z" level=info msg="StopContainer for \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\" returns successfully"
Mar 14 00:20:54.654030 containerd[1471]: time="2026-03-14T00:20:54.651210195Z" level=info msg="StopPodSandbox for \"b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e\""
Mar 14 00:20:54.654030 containerd[1471]: time="2026-03-14T00:20:54.652721513Z" level=info msg="Container to stop \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:20:54.668180 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e-shm.mount: Deactivated successfully.
Mar 14 00:20:54.701135 systemd[1]: cri-containerd-6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47.scope: Deactivated successfully.
Mar 14 00:20:54.795458 systemd[1]: cri-containerd-b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e.scope: Deactivated successfully.
Mar 14 00:20:54.946123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47-rootfs.mount: Deactivated successfully.
Mar 14 00:20:54.995049 containerd[1471]: time="2026-03-14T00:20:54.992007038Z" level=info msg="shim disconnected" id=6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47 namespace=k8s.io
Mar 14 00:20:54.995049 containerd[1471]: time="2026-03-14T00:20:54.992086425Z" level=warning msg="cleaning up after shim disconnected" id=6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47 namespace=k8s.io
Mar 14 00:20:54.995049 containerd[1471]: time="2026-03-14T00:20:54.992100291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:20:55.076965 sshd[4722]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:55.110216 systemd[1]: sshd@41-10.0.0.36:22-10.0.0.1:48996.service: Deactivated successfully.
Mar 14 00:20:55.128726 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e-rootfs.mount: Deactivated successfully.
Mar 14 00:20:55.136917 systemd[1]: session-42.scope: Deactivated successfully.
Mar 14 00:20:55.137262 systemd[1]: session-42.scope: Consumed 1.650s CPU time.
Mar 14 00:20:55.141004 systemd-logind[1451]: Session 42 logged out. Waiting for processes to exit.
Mar 14 00:20:55.188174 systemd[1]: Started sshd@42-10.0.0.36:22-10.0.0.1:55942.service - OpenSSH per-connection server daemon (10.0.0.1:55942).
Mar 14 00:20:55.195699 containerd[1471]: time="2026-03-14T00:20:55.195138440Z" level=info msg="shim disconnected" id=b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e namespace=k8s.io
Mar 14 00:20:55.195699 containerd[1471]: time="2026-03-14T00:20:55.195220293Z" level=warning msg="cleaning up after shim disconnected" id=b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e namespace=k8s.io
Mar 14 00:20:55.195699 containerd[1471]: time="2026-03-14T00:20:55.195236011Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:20:55.195955 systemd-logind[1451]: Removed session 42.
Mar 14 00:20:55.225036 containerd[1471]: time="2026-03-14T00:20:55.224715144Z" level=info msg="TearDown network for sandbox \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" successfully"
Mar 14 00:20:55.225036 containerd[1471]: time="2026-03-14T00:20:55.224767182Z" level=info msg="StopPodSandbox for \"6ed8dd80eefefa5b6128eec143a2939f727db57f867281d6404023bf1f566d47\" returns successfully"
Mar 14 00:20:55.273726 sshd[4859]: Accepted publickey for core from 10.0.0.1 port 55942 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:55.282073 sshd[4859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:55.331251 containerd[1471]: time="2026-03-14T00:20:55.331195554Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:20:55Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:20:55.349935 systemd-logind[1451]: New session 43 of user core.
Mar 14 00:20:55.358653 containerd[1471]: time="2026-03-14T00:20:55.352057281Z" level=info msg="TearDown network for sandbox \"b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e\" successfully"
Mar 14 00:20:55.358653 containerd[1471]: time="2026-03-14T00:20:55.354920765Z" level=info msg="StopPodSandbox for \"b7d5abff8c0371fcb817365f81d6bdfef7abee713ab5c30aca50ea91f231550e\" returns successfully"
Mar 14 00:20:55.406639 kubelet[2647]: I0314 00:20:55.406270 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-bpf-maps" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:20:55.409671 kubelet[2647]: I0314 00:20:55.409295 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-bpf-maps\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.418704 kubelet[2647]: I0314 00:20:55.416201 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-config-path\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.418704 kubelet[2647]: I0314 00:20:55.416263 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-cgroup\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.418704 kubelet[2647]: I0314 00:20:55.416766 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-host-proc-sys-kernel\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.418704 kubelet[2647]: I0314 00:20:55.416796 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-run\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-run\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.418704 kubelet[2647]: I0314 00:20:55.416820 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-lib-modules\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-lib-modules\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.410912 systemd[1]: Started session-43.scope - Session 43 of User core.
Mar 14 00:20:55.419032 kubelet[2647]: I0314 00:20:55.416842 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-xtables-lock\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.419032 kubelet[2647]: I0314 00:20:55.416870 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/d26d8103-c5fe-4eb0-88b3-058ee820f281-clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d26d8103-c5fe-4eb0-88b3-058ee820f281-clustermesh-secrets\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.419032 kubelet[2647]: I0314 00:20:55.416894 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-hostproc\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-hostproc\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.419032 kubelet[2647]: I0314 00:20:55.416920 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/d26d8103-c5fe-4eb0-88b3-058ee820f281-hubble-tls\" (UniqueName: \"kubernetes.io/projected/d26d8103-c5fe-4eb0-88b3-058ee820f281-hubble-tls\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.419032 kubelet[2647]: I0314 00:20:55.417010 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-etc-cni-netd\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.419213 kubelet[2647]: I0314 00:20:55.417037 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/d26d8103-c5fe-4eb0-88b3-058ee820f281-kube-api-access-z4rhl\" (UniqueName: \"kubernetes.io/projected/d26d8103-c5fe-4eb0-88b3-058ee820f281-kube-api-access-z4rhl\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.419213 kubelet[2647]: I0314 00:20:55.417057 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cni-path\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cni-path\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.419213 kubelet[2647]: I0314 00:20:55.417081 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-host-proc-sys-net\") pod \"d26d8103-c5fe-4eb0-88b3-058ee820f281\" (UID: \"d26d8103-c5fe-4eb0-88b3-058ee820f281\") "
Mar 14 00:20:55.419213 kubelet[2647]: I0314 00:20:55.417165 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-host-proc-sys-net" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:20:55.470656 kubelet[2647]: I0314 00:20:55.431000 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-etc-cni-netd" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:20:55.470656 kubelet[2647]: I0314 00:20:55.431061 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-hostproc" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:20:55.470656 kubelet[2647]: I0314 00:20:55.439437 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-cgroup" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:20:55.470656 kubelet[2647]: I0314 00:20:55.439623 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-host-proc-sys-kernel" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:20:55.470656 kubelet[2647]: I0314 00:20:55.439666 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-run" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:20:55.476003 kubelet[2647]: I0314 00:20:55.439693 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-lib-modules" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:20:55.476003 kubelet[2647]: I0314 00:20:55.439720 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-xtables-lock" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:20:55.476003 kubelet[2647]: I0314 00:20:55.452636 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cni-path" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Mar 14 00:20:55.477706 kubelet[2647]: I0314 00:20:55.477658 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-config-path" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:20:55.489932 systemd[1]: var-lib-kubelet-pods-d26d8103\x2dc5fe\x2d4eb0\x2d88b3\x2d058ee820f281-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz4rhl.mount: Deactivated successfully.
Mar 14 00:20:55.507193 systemd[1]: var-lib-kubelet-pods-d26d8103\x2dc5fe\x2d4eb0\x2d88b3\x2d058ee820f281-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 14 00:20:55.517174 kubelet[2647]: I0314 00:20:55.516018 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d26d8103-c5fe-4eb0-88b3-058ee820f281-clustermesh-secrets" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Mar 14 00:20:55.523848 systemd[1]: var-lib-kubelet-pods-d26d8103\x2dc5fe\x2d4eb0\x2d88b3\x2d058ee820f281-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 14 00:20:55.527926 kubelet[2647]: I0314 00:20:55.527900 2647 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.528881 kubelet[2647]: I0314 00:20:55.528638 2647 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d26d8103-c5fe-4eb0-88b3-058ee820f281-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.528881 kubelet[2647]: I0314 00:20:55.528751 2647 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.528881 kubelet[2647]: I0314 00:20:55.528764 2647 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.528881 kubelet[2647]: I0314 00:20:55.528776 2647 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.528881 kubelet[2647]: I0314 00:20:55.528787 2647 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.528881 kubelet[2647]: I0314 00:20:55.528798 2647 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.528881 kubelet[2647]: I0314 00:20:55.528809 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.528881 kubelet[2647]: I0314 00:20:55.528820 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.529805 kubelet[2647]: I0314 00:20:55.528830 2647 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.529805 kubelet[2647]: I0314 00:20:55.528844 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.529805 kubelet[2647]: I0314 00:20:55.528856 2647 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d26d8103-c5fe-4eb0-88b3-058ee820f281-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.553120 kubelet[2647]: I0314 00:20:55.552166 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d26d8103-c5fe-4eb0-88b3-058ee820f281-kube-api-access-z4rhl" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "kube-api-access-z4rhl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:20:55.559993 kubelet[2647]: I0314 00:20:55.559373 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d26d8103-c5fe-4eb0-88b3-058ee820f281-hubble-tls" pod "d26d8103-c5fe-4eb0-88b3-058ee820f281" (UID: "d26d8103-c5fe-4eb0-88b3-058ee820f281"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:20:55.630188 kubelet[2647]: I0314 00:20:55.629792 2647 scope.go:122] "RemoveContainer" containerID="a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6"
Mar 14 00:20:55.634750 kubelet[2647]: I0314 00:20:55.631790 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/fd7934fa-f75a-471e-9a55-ea1d310c5afc-kube-api-access-sj9n8\" (UniqueName: \"kubernetes.io/projected/fd7934fa-f75a-471e-9a55-ea1d310c5afc-kube-api-access-sj9n8\") pod \"fd7934fa-f75a-471e-9a55-ea1d310c5afc\" (UID: \"fd7934fa-f75a-471e-9a55-ea1d310c5afc\") "
Mar 14 00:20:55.634750 kubelet[2647]: I0314 00:20:55.631840 2647 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/fd7934fa-f75a-471e-9a55-ea1d310c5afc-cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd7934fa-f75a-471e-9a55-ea1d310c5afc-cilium-config-path\") pod \"fd7934fa-f75a-471e-9a55-ea1d310c5afc\" (UID: \"fd7934fa-f75a-471e-9a55-ea1d310c5afc\") "
Mar 14 00:20:55.634750 kubelet[2647]: I0314 00:20:55.631964 2647 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d26d8103-c5fe-4eb0-88b3-058ee820f281-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.634750 kubelet[2647]: I0314 00:20:55.631987 2647 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z4rhl\" (UniqueName: \"kubernetes.io/projected/d26d8103-c5fe-4eb0-88b3-058ee820f281-kube-api-access-z4rhl\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.652056 kubelet[2647]: I0314 00:20:55.651911 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd7934fa-f75a-471e-9a55-ea1d310c5afc-cilium-config-path" pod "fd7934fa-f75a-471e-9a55-ea1d310c5afc" (UID: "fd7934fa-f75a-471e-9a55-ea1d310c5afc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 14 00:20:55.659697 containerd[1471]: time="2026-03-14T00:20:55.658101725Z" level=info msg="RemoveContainer for \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\""
Mar 14 00:20:55.660171 kubelet[2647]: I0314 00:20:55.658953 2647 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7934fa-f75a-471e-9a55-ea1d310c5afc-kube-api-access-sj9n8" pod "fd7934fa-f75a-471e-9a55-ea1d310c5afc" (UID: "fd7934fa-f75a-471e-9a55-ea1d310c5afc"). InnerVolumeSpecName "kube-api-access-sj9n8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 14 00:20:55.688250 systemd[1]: Removed slice kubepods-burstable-podd26d8103_c5fe_4eb0_88b3_058ee820f281.slice - libcontainer container kubepods-burstable-podd26d8103_c5fe_4eb0_88b3_058ee820f281.slice.
Mar 14 00:20:55.688482 systemd[1]: kubepods-burstable-podd26d8103_c5fe_4eb0_88b3_058ee820f281.slice: Consumed 26.092s CPU time.
Mar 14 00:20:55.706018 containerd[1471]: time="2026-03-14T00:20:55.703234066Z" level=info msg="RemoveContainer for \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\" returns successfully"
Mar 14 00:20:55.708192 kubelet[2647]: I0314 00:20:55.707774 2647 scope.go:122] "RemoveContainer" containerID="25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704"
Mar 14 00:20:55.734838 kubelet[2647]: I0314 00:20:55.732684 2647 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sj9n8\" (UniqueName: \"kubernetes.io/projected/fd7934fa-f75a-471e-9a55-ea1d310c5afc-kube-api-access-sj9n8\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.734838 kubelet[2647]: I0314 00:20:55.733996 2647 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd7934fa-f75a-471e-9a55-ea1d310c5afc-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 14 00:20:55.748835 containerd[1471]: time="2026-03-14T00:20:55.745022411Z" level=info msg="RemoveContainer for \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\""
Mar 14 00:20:55.747892 systemd[1]: Removed slice kubepods-besteffort-podfd7934fa_f75a_471e_9a55_ea1d310c5afc.slice - libcontainer container kubepods-besteffort-podfd7934fa_f75a_471e_9a55_ea1d310c5afc.slice.
Mar 14 00:20:55.751129 systemd[1]: kubepods-besteffort-podfd7934fa_f75a_471e_9a55_ea1d310c5afc.slice: Consumed 3.491s CPU time.
Mar 14 00:20:55.859675 containerd[1471]: time="2026-03-14T00:20:55.858917179Z" level=info msg="RemoveContainer for \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\" returns successfully"
Mar 14 00:20:55.859841 kubelet[2647]: I0314 00:20:55.859279 2647 scope.go:122] "RemoveContainer" containerID="03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf"
Mar 14 00:20:55.871638 containerd[1471]: time="2026-03-14T00:20:55.869746977Z" level=info msg="RemoveContainer for \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\""
Mar 14 00:20:55.903203 containerd[1471]: time="2026-03-14T00:20:55.903123453Z" level=info msg="RemoveContainer for \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\" returns successfully"
Mar 14 00:20:55.934398 kubelet[2647]: I0314 00:20:55.932665 2647 scope.go:122] "RemoveContainer" containerID="f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be"
Mar 14 00:20:55.969867 systemd[1]: var-lib-kubelet-pods-fd7934fa\x2df75a\x2d471e\x2d9a55\x2dea1d310c5afc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsj9n8.mount: Deactivated successfully.
Mar 14 00:20:55.977029 containerd[1471]: time="2026-03-14T00:20:55.976186114Z" level=info msg="RemoveContainer for \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\""
Mar 14 00:20:55.999392 containerd[1471]: time="2026-03-14T00:20:55.997084027Z" level=info msg="RemoveContainer for \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\" returns successfully"
Mar 14 00:20:55.999700 kubelet[2647]: I0314 00:20:55.997699 2647 scope.go:122] "RemoveContainer" containerID="fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145"
Mar 14 00:20:56.007754 containerd[1471]: time="2026-03-14T00:20:56.006768709Z" level=info msg="RemoveContainer for \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\""
Mar 14 00:20:56.030174 containerd[1471]: time="2026-03-14T00:20:56.029874711Z" level=info msg="RemoveContainer for \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\" returns successfully"
Mar 14 00:20:56.030409 kubelet[2647]: I0314 00:20:56.030228 2647 scope.go:122] "RemoveContainer" containerID="a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6"
Mar 14 00:20:56.040106 containerd[1471]: time="2026-03-14T00:20:56.038782128Z" level=error msg="ContainerStatus for \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\": not found"
Mar 14 00:20:56.040106 containerd[1471]: time="2026-03-14T00:20:56.039831597Z" level=error msg="ContainerStatus for \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\": not found"
Mar 14 00:20:56.040282 kubelet[2647]: E0314 00:20:56.039239 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\": not found" containerID="a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6"
Mar 14 00:20:56.040282 kubelet[2647]: I0314 00:20:56.039382 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6"} err="failed to get container status \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8261bc7d80c3cf3a800a1a4f77a8859dc0e05f910ab9e102579be9fd5b646c6\": not found"
Mar 14 00:20:56.040282 kubelet[2647]: I0314 00:20:56.039486 2647 scope.go:122] "RemoveContainer" containerID="25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704"
Mar 14 00:20:56.040282 kubelet[2647]: E0314 00:20:56.039963 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\": not found" containerID="25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704"
Mar 14 00:20:56.040282 kubelet[2647]: I0314 00:20:56.039992 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704"} err="failed to get container status \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\": rpc error: code = NotFound desc = an error occurred when try to find container \"25d14e40c279e6a9c1d2949b6778cdc3f3dad4e92c6e39a058f69f37017fa704\": not found"
Mar 14 00:20:56.040282 kubelet[2647]: I0314 00:20:56.040013 2647 scope.go:122] "RemoveContainer" containerID="03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf"
Mar 14 00:20:56.041398 containerd[1471]: time="2026-03-14T00:20:56.040212685Z" level=error msg="ContainerStatus for \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\": not found"
Mar 14 00:20:56.041465 kubelet[2647]: E0314 00:20:56.041245 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\": not found" containerID="03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf"
Mar 14 00:20:56.041465 kubelet[2647]: I0314 00:20:56.041284 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf"} err="failed to get container status \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\": rpc error: code = NotFound desc = an error occurred when try to find container \"03e3806f512f2f8777fd45c76fee1cebe45e29fafb859dc78e0805afb472e3bf\": not found"
Mar 14 00:20:56.041465 kubelet[2647]: I0314 00:20:56.041308 2647 scope.go:122] "RemoveContainer" containerID="f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be"
Mar 14 00:20:56.041901 containerd[1471]: time="2026-03-14T00:20:56.041804022Z" level=error msg="ContainerStatus for \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\": not found"
Mar 14 00:20:56.041957 kubelet[2647]: E0314 00:20:56.041940 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\": not found" containerID="f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be"
Mar 14 00:20:56.041998 kubelet[2647]: I0314 00:20:56.041965 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be"} err="failed to get container status \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\": rpc error: code = NotFound desc = an error occurred when try to find container \"f131b0804d20293c009d60dc5ef613816ee5d81fc574e04363dfa541c326d5be\": not found"
Mar 14 00:20:56.041998 kubelet[2647]: I0314 00:20:56.041983 2647 scope.go:122] "RemoveContainer" containerID="fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145"
Mar 14 00:20:56.045823 containerd[1471]: time="2026-03-14T00:20:56.042260920Z" level=error msg="ContainerStatus for \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\": not found"
Mar 14 00:20:56.050133 kubelet[2647]: E0314 00:20:56.047753 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\": not found" containerID="fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145"
Mar 14 00:20:56.050133 kubelet[2647]: I0314 00:20:56.047794 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145"} err="failed to get container status \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\": rpc error: code = NotFound desc = an error occurred when try to find container \"fcc28ed0bde498957e227aec98d5e40ee9dbc562bc31c9ae14d3d81e205f1145\": not found"
Mar 14 00:20:56.050133 kubelet[2647]: I0314 00:20:56.047820 2647 scope.go:122] "RemoveContainer" containerID="64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b"
Mar 14 00:20:56.058188 containerd[1471]: time="2026-03-14T00:20:56.056070771Z" level=info msg="RemoveContainer for \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\""
Mar 14 00:20:56.071016 containerd[1471]: time="2026-03-14T00:20:56.070967837Z" level=info msg="RemoveContainer for \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\" returns successfully"
Mar 14 00:20:56.077814 kubelet[2647]: I0314 00:20:56.077671 2647 scope.go:122] "RemoveContainer" containerID="64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b"
Mar 14 00:20:56.079614 containerd[1471]: time="2026-03-14T00:20:56.078214098Z" level=error msg="ContainerStatus for \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\": not found"
Mar 14 00:20:56.080036 kubelet[2647]: E0314 00:20:56.079870 2647 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\": not found" containerID="64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b"
Mar 14 00:20:56.080036 kubelet[2647]: I0314 00:20:56.079914 2647 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b"} err="failed to get container status \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\": rpc error: code = NotFound desc = an error occurred when try to find container \"64981327faa03cecc5ccdd9b3bf48063ebc41e15ef22eb2302ca65702711036b\": not found"
Mar 14 00:20:57.056648 kubelet[2647]: I0314 00:20:57.056008 2647 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d26d8103-c5fe-4eb0-88b3-058ee820f281" path="/var/lib/kubelet/pods/d26d8103-c5fe-4eb0-88b3-058ee820f281/volumes"
Mar 14 00:20:57.073951 kubelet[2647]: I0314 00:20:57.072955 2647 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fd7934fa-f75a-471e-9a55-ea1d310c5afc" path="/var/lib/kubelet/pods/fd7934fa-f75a-471e-9a55-ea1d310c5afc/volumes"
Mar 14 00:20:57.206937 kubelet[2647]: E0314 00:20:57.206188 2647 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 14 00:20:58.792447 sshd[4859]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:58.823127 systemd[1]: sshd@42-10.0.0.36:22-10.0.0.1:55942.service: Deactivated successfully.
Mar 14 00:20:58.835253 systemd[1]: session-43.scope: Deactivated successfully.
Mar 14 00:20:58.849103 systemd[1]: session-43.scope: Consumed 1.664s CPU time.
Mar 14 00:20:58.856172 systemd-logind[1451]: Session 43 logged out. Waiting for processes to exit.
Mar 14 00:20:58.878985 systemd[1]: Started sshd@43-10.0.0.36:22-10.0.0.1:55946.service - OpenSSH per-connection server daemon (10.0.0.1:55946).
Mar 14 00:20:58.889103 systemd-logind[1451]: Removed session 43.
Mar 14 00:20:59.043974 kubelet[2647]: I0314 00:20:59.042863 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0df40a62-b0e4-432a-9ce2-074a2cd2d372-etc-cni-netd\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.043974 kubelet[2647]: I0314 00:20:59.043173 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0df40a62-b0e4-432a-9ce2-074a2cd2d372-lib-modules\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.043974 kubelet[2647]: I0314 00:20:59.043202 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0df40a62-b0e4-432a-9ce2-074a2cd2d372-xtables-lock\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.043974 kubelet[2647]: I0314 00:20:59.043227 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0df40a62-b0e4-432a-9ce2-074a2cd2d372-clustermesh-secrets\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.043974 kubelet[2647]: I0314 00:20:59.043252 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0df40a62-b0e4-432a-9ce2-074a2cd2d372-host-proc-sys-net\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.043974 kubelet[2647]: I0314 00:20:59.043277 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0df40a62-b0e4-432a-9ce2-074a2cd2d372-host-proc-sys-kernel\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.052178 kubelet[2647]: I0314 00:20:59.043872 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0df40a62-b0e4-432a-9ce2-074a2cd2d372-hubble-tls\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.052178 kubelet[2647]: I0314 00:20:59.043912 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhsjv\" (UniqueName: \"kubernetes.io/projected/0df40a62-b0e4-432a-9ce2-074a2cd2d372-kube-api-access-hhsjv\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.052178 kubelet[2647]: I0314 00:20:59.043959 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0df40a62-b0e4-432a-9ce2-074a2cd2d372-cilium-run\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.052178 kubelet[2647]: I0314 00:20:59.043980 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0df40a62-b0e4-432a-9ce2-074a2cd2d372-hostproc\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.052178 kubelet[2647]: I0314 00:20:59.044024 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0df40a62-b0e4-432a-9ce2-074a2cd2d372-cilium-cgroup\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.052178 kubelet[2647]: I0314 00:20:59.044050 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0df40a62-b0e4-432a-9ce2-074a2cd2d372-cilium-ipsec-secrets\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.052661 kubelet[2647]: I0314 00:20:59.044275 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0df40a62-b0e4-432a-9ce2-074a2cd2d372-cni-path\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.052661 kubelet[2647]: I0314 00:20:59.044463 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0df40a62-b0e4-432a-9ce2-074a2cd2d372-cilium-config-path\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.062632 systemd[1]: Created slice kubepods-burstable-pod0df40a62_b0e4_432a_9ce2_074a2cd2d372.slice - libcontainer container kubepods-burstable-pod0df40a62_b0e4_432a_9ce2_074a2cd2d372.slice.
Mar 14 00:20:59.062902 kubelet[2647]: I0314 00:20:59.061715 2647 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0df40a62-b0e4-432a-9ce2-074a2cd2d372-bpf-maps\") pod \"cilium-222qx\" (UID: \"0df40a62-b0e4-432a-9ce2-074a2cd2d372\") " pod="kube-system/cilium-222qx"
Mar 14 00:20:59.145703 sshd[4894]: Accepted publickey for core from 10.0.0.1 port 55946 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:59.151759 sshd[4894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:59.198144 systemd-logind[1451]: New session 44 of user core.
Mar 14 00:20:59.210918 systemd[1]: Started session-44.scope - Session 44 of User core.
Mar 14 00:20:59.331168 sshd[4894]: pam_unix(sshd:session): session closed for user core
Mar 14 00:20:59.357071 systemd[1]: sshd@43-10.0.0.36:22-10.0.0.1:55946.service: Deactivated successfully.
Mar 14 00:20:59.374079 systemd[1]: session-44.scope: Deactivated successfully.
Mar 14 00:20:59.388978 kubelet[2647]: E0314 00:20:59.387111 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:20:59.387153 systemd-logind[1451]: Session 44 logged out. Waiting for processes to exit.
Mar 14 00:20:59.389985 containerd[1471]: time="2026-03-14T00:20:59.389937722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-222qx,Uid:0df40a62-b0e4-432a-9ce2-074a2cd2d372,Namespace:kube-system,Attempt:0,}"
Mar 14 00:20:59.427839 systemd[1]: Started sshd@44-10.0.0.36:22-10.0.0.1:55960.service - OpenSSH per-connection server daemon (10.0.0.1:55960).
Mar 14 00:20:59.448481 systemd-logind[1451]: Removed session 44.
Mar 14 00:20:59.563724 sshd[4906]: Accepted publickey for core from 10.0.0.1 port 55960 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:20:59.567828 sshd[4906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:20:59.586352 systemd-logind[1451]: New session 45 of user core.
Mar 14 00:20:59.592965 containerd[1471]: time="2026-03-14T00:20:59.591931768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:20:59.592965 containerd[1471]: time="2026-03-14T00:20:59.592084642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:20:59.592965 containerd[1471]: time="2026-03-14T00:20:59.592151505Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:20:59.592965 containerd[1471]: time="2026-03-14T00:20:59.592370833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:20:59.602450 systemd[1]: Started session-45.scope - Session 45 of User core.
Mar 14 00:20:59.709414 systemd[1]: Started cri-containerd-67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d.scope - libcontainer container 67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d.
Mar 14 00:20:59.930109 containerd[1471]: time="2026-03-14T00:20:59.928850155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-222qx,Uid:0df40a62-b0e4-432a-9ce2-074a2cd2d372,Namespace:kube-system,Attempt:0,} returns sandbox id \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\""
Mar 14 00:20:59.946674 kubelet[2647]: E0314 00:20:59.934933 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:20:59.974751 containerd[1471]: time="2026-03-14T00:20:59.973284645Z" level=info msg="CreateContainer within sandbox \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 14 00:21:00.151982 containerd[1471]: time="2026-03-14T00:21:00.151075084Z" level=info msg="CreateContainer within sandbox \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8d8587deb10d413d5d100e48c95465ef6232ebd006d5750d1acd16595a801041\""
Mar 14 00:21:00.165724 containerd[1471]: time="2026-03-14T00:21:00.163869784Z" level=info msg="StartContainer for \"8d8587deb10d413d5d100e48c95465ef6232ebd006d5750d1acd16595a801041\""
Mar 14 00:21:00.380838 systemd[1]: Started cri-containerd-8d8587deb10d413d5d100e48c95465ef6232ebd006d5750d1acd16595a801041.scope - libcontainer container 8d8587deb10d413d5d100e48c95465ef6232ebd006d5750d1acd16595a801041.
Mar 14 00:21:00.572151 containerd[1471]: time="2026-03-14T00:21:00.571986454Z" level=info msg="StartContainer for \"8d8587deb10d413d5d100e48c95465ef6232ebd006d5750d1acd16595a801041\" returns successfully"
Mar 14 00:21:00.637038 systemd[1]: cri-containerd-8d8587deb10d413d5d100e48c95465ef6232ebd006d5750d1acd16595a801041.scope: Deactivated successfully.
Mar 14 00:21:00.753227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d8587deb10d413d5d100e48c95465ef6232ebd006d5750d1acd16595a801041-rootfs.mount: Deactivated successfully.
Mar 14 00:21:00.762413 kubelet[2647]: E0314 00:21:00.761213 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:00.810754 containerd[1471]: time="2026-03-14T00:21:00.808949494Z" level=info msg="shim disconnected" id=8d8587deb10d413d5d100e48c95465ef6232ebd006d5750d1acd16595a801041 namespace=k8s.io
Mar 14 00:21:00.810754 containerd[1471]: time="2026-03-14T00:21:00.809027168Z" level=warning msg="cleaning up after shim disconnected" id=8d8587deb10d413d5d100e48c95465ef6232ebd006d5750d1acd16595a801041 namespace=k8s.io
Mar 14 00:21:00.810754 containerd[1471]: time="2026-03-14T00:21:00.809045182Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:01.773151 kubelet[2647]: E0314 00:21:01.773099 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:01.817231 containerd[1471]: time="2026-03-14T00:21:01.812787964Z" level=info msg="CreateContainer within sandbox \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 00:21:01.923936 containerd[1471]: time="2026-03-14T00:21:01.922398441Z" level=info msg="CreateContainer within sandbox \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7cc28607c8e6004a0b86fd2665a4b4435041a6f311e31f0a2e47b85b135d5c77\""
Mar 14 00:21:01.927115 containerd[1471]: time="2026-03-14T00:21:01.925859489Z" level=info msg="StartContainer for \"7cc28607c8e6004a0b86fd2665a4b4435041a6f311e31f0a2e47b85b135d5c77\""
Mar 14 00:21:02.093141 systemd[1]: Started cri-containerd-7cc28607c8e6004a0b86fd2665a4b4435041a6f311e31f0a2e47b85b135d5c77.scope - libcontainer container 7cc28607c8e6004a0b86fd2665a4b4435041a6f311e31f0a2e47b85b135d5c77.
Mar 14 00:21:02.211247 kubelet[2647]: E0314 00:21:02.210461 2647 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 14 00:21:02.293678 containerd[1471]: time="2026-03-14T00:21:02.291840875Z" level=info msg="StartContainer for \"7cc28607c8e6004a0b86fd2665a4b4435041a6f311e31f0a2e47b85b135d5c77\" returns successfully"
Mar 14 00:21:02.319412 systemd[1]: cri-containerd-7cc28607c8e6004a0b86fd2665a4b4435041a6f311e31f0a2e47b85b135d5c77.scope: Deactivated successfully.
Mar 14 00:21:02.498727 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7cc28607c8e6004a0b86fd2665a4b4435041a6f311e31f0a2e47b85b135d5c77-rootfs.mount: Deactivated successfully.
Mar 14 00:21:02.549654 containerd[1471]: time="2026-03-14T00:21:02.548859511Z" level=info msg="shim disconnected" id=7cc28607c8e6004a0b86fd2665a4b4435041a6f311e31f0a2e47b85b135d5c77 namespace=k8s.io
Mar 14 00:21:02.549654 containerd[1471]: time="2026-03-14T00:21:02.548936664Z" level=warning msg="cleaning up after shim disconnected" id=7cc28607c8e6004a0b86fd2665a4b4435041a6f311e31f0a2e47b85b135d5c77 namespace=k8s.io
Mar 14 00:21:02.549654 containerd[1471]: time="2026-03-14T00:21:02.548956161Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:02.801961 kubelet[2647]: E0314 00:21:02.799261 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:02.848478 containerd[1471]: time="2026-03-14T00:21:02.848223145Z" level=info msg="CreateContainer within sandbox \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:21:03.012010 containerd[1471]: time="2026-03-14T00:21:03.011418350Z" level=info msg="CreateContainer within sandbox \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9d671cb1017dff29b4d29eeec30cf609e71d571360d9492fc7d771bef4e70ebd\""
Mar 14 00:21:03.030641 containerd[1471]: time="2026-03-14T00:21:03.030466263Z" level=info msg="StartContainer for \"9d671cb1017dff29b4d29eeec30cf609e71d571360d9492fc7d771bef4e70ebd\""
Mar 14 00:21:03.303926 systemd[1]: Started cri-containerd-9d671cb1017dff29b4d29eeec30cf609e71d571360d9492fc7d771bef4e70ebd.scope - libcontainer container 9d671cb1017dff29b4d29eeec30cf609e71d571360d9492fc7d771bef4e70ebd.
Mar 14 00:21:03.610441 containerd[1471]: time="2026-03-14T00:21:03.610109242Z" level=info msg="StartContainer for \"9d671cb1017dff29b4d29eeec30cf609e71d571360d9492fc7d771bef4e70ebd\" returns successfully"
Mar 14 00:21:03.617797 systemd[1]: cri-containerd-9d671cb1017dff29b4d29eeec30cf609e71d571360d9492fc7d771bef4e70ebd.scope: Deactivated successfully.
Mar 14 00:21:03.851837 kubelet[2647]: E0314 00:21:03.850415 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:03.926213 containerd[1471]: time="2026-03-14T00:21:03.925836824Z" level=info msg="shim disconnected" id=9d671cb1017dff29b4d29eeec30cf609e71d571360d9492fc7d771bef4e70ebd namespace=k8s.io
Mar 14 00:21:03.926213 containerd[1471]: time="2026-03-14T00:21:03.925968549Z" level=warning msg="cleaning up after shim disconnected" id=9d671cb1017dff29b4d29eeec30cf609e71d571360d9492fc7d771bef4e70ebd namespace=k8s.io
Mar 14 00:21:03.926213 containerd[1471]: time="2026-03-14T00:21:03.925985039Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:03.939753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d671cb1017dff29b4d29eeec30cf609e71d571360d9492fc7d771bef4e70ebd-rootfs.mount: Deactivated successfully.
Mar 14 00:21:04.890654 kubelet[2647]: E0314 00:21:04.886366 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:04.966632 containerd[1471]: time="2026-03-14T00:21:04.952024080Z" level=info msg="CreateContainer within sandbox \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:21:05.111643 containerd[1471]: time="2026-03-14T00:21:05.109066474Z" level=info msg="CreateContainer within sandbox \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"6356cb2e56c805ad7c664cec028a1bd4a3ca7439f481dafc49c67fa530509fc0\""
Mar 14 00:21:05.120377 containerd[1471]: time="2026-03-14T00:21:05.112067286Z" level=info msg="StartContainer for \"6356cb2e56c805ad7c664cec028a1bd4a3ca7439f481dafc49c67fa530509fc0\""
Mar 14 00:21:05.347682 systemd[1]: Started cri-containerd-6356cb2e56c805ad7c664cec028a1bd4a3ca7439f481dafc49c67fa530509fc0.scope - libcontainer container 6356cb2e56c805ad7c664cec028a1bd4a3ca7439f481dafc49c67fa530509fc0.
Mar 14 00:21:05.573184 systemd[1]: cri-containerd-6356cb2e56c805ad7c664cec028a1bd4a3ca7439f481dafc49c67fa530509fc0.scope: Deactivated successfully.
Mar 14 00:21:05.589115 containerd[1471]: time="2026-03-14T00:21:05.587054302Z" level=info msg="StartContainer for \"6356cb2e56c805ad7c664cec028a1bd4a3ca7439f481dafc49c67fa530509fc0\" returns successfully"
Mar 14 00:21:05.682057 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6356cb2e56c805ad7c664cec028a1bd4a3ca7439f481dafc49c67fa530509fc0-rootfs.mount: Deactivated successfully.
Mar 14 00:21:05.749595 containerd[1471]: time="2026-03-14T00:21:05.748920332Z" level=info msg="shim disconnected" id=6356cb2e56c805ad7c664cec028a1bd4a3ca7439f481dafc49c67fa530509fc0 namespace=k8s.io
Mar 14 00:21:05.749595 containerd[1471]: time="2026-03-14T00:21:05.748993459Z" level=warning msg="cleaning up after shim disconnected" id=6356cb2e56c805ad7c664cec028a1bd4a3ca7439f481dafc49c67fa530509fc0 namespace=k8s.io
Mar 14 00:21:05.749595 containerd[1471]: time="2026-03-14T00:21:05.749006483Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:21:05.902694 kubelet[2647]: E0314 00:21:05.902037 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:05.942215 containerd[1471]: time="2026-03-14T00:21:05.941447676Z" level=info msg="CreateContainer within sandbox \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:21:06.034015 containerd[1471]: time="2026-03-14T00:21:06.032958565Z" level=info msg="CreateContainer within sandbox \"67f3662ced155564222febc371e2a5e0d1c20a3d602198d888586dc707bafa5d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58ec3e6b01e7b1f47506bf91c8e01c16bd3eef023716209d5a1a768fb0552502\""
Mar 14 00:21:06.047054 containerd[1471]: time="2026-03-14T00:21:06.042655736Z" level=info msg="StartContainer for \"58ec3e6b01e7b1f47506bf91c8e01c16bd3eef023716209d5a1a768fb0552502\""
Mar 14 00:21:06.186855 systemd[1]: Started cri-containerd-58ec3e6b01e7b1f47506bf91c8e01c16bd3eef023716209d5a1a768fb0552502.scope - libcontainer container 58ec3e6b01e7b1f47506bf91c8e01c16bd3eef023716209d5a1a768fb0552502.
Mar 14 00:21:06.345219 containerd[1471]: time="2026-03-14T00:21:06.345084006Z" level=info msg="StartContainer for \"58ec3e6b01e7b1f47506bf91c8e01c16bd3eef023716209d5a1a768fb0552502\" returns successfully"
Mar 14 00:21:06.932022 kubelet[2647]: E0314 00:21:06.931741 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:07.026146 kubelet[2647]: I0314 00:21:07.023761 2647 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/cilium-222qx" podStartSLOduration=9.023741975 podStartE2EDuration="9.023741975s" podCreationTimestamp="2026-03-14 00:20:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:21:07.015641007 +0000 UTC m=+280.318684079" watchObservedRunningTime="2026-03-14 00:21:07.023741975 +0000 UTC m=+280.326785045"
Mar 14 00:21:07.041399 kubelet[2647]: I0314 00:21:07.028645 2647 setters.go:546] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T00:21:07Z","lastTransitionTime":"2026-03-14T00:21:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 14 00:21:07.976743 kubelet[2647]: E0314 00:21:07.975362 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:08.220694 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 14 00:21:10.023843 kubelet[2647]: E0314 00:21:10.023163 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:14.379396 systemd[1]: run-containerd-runc-k8s.io-58ec3e6b01e7b1f47506bf91c8e01c16bd3eef023716209d5a1a768fb0552502-runc.G8Nw5o.mount: Deactivated successfully.
Mar 14 00:21:15.058158 kubelet[2647]: E0314 00:21:15.056876 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:17.202154 systemd-networkd[1393]: lxc_health: Link UP
Mar 14 00:21:17.221998 systemd-networkd[1393]: lxc_health: Gained carrier
Mar 14 00:21:17.389792 kubelet[2647]: E0314 00:21:17.389157 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:18.080428 kubelet[2647]: E0314 00:21:18.079760 2647 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:21:19.277694 systemd-networkd[1393]: lxc_health: Gained IPv6LL
Mar 14 00:21:21.792793 systemd[1]: run-containerd-runc-k8s.io-58ec3e6b01e7b1f47506bf91c8e01c16bd3eef023716209d5a1a768fb0552502-runc.zCAYqy.mount: Deactivated successfully.
Mar 14 00:21:24.165756 sshd[4906]: pam_unix(sshd:session): session closed for user core
Mar 14 00:21:24.178282 systemd[1]: sshd@44-10.0.0.36:22-10.0.0.1:55960.service: Deactivated successfully.
Mar 14 00:21:24.183115 systemd[1]: session-45.scope: Deactivated successfully.
Mar 14 00:21:24.183558 systemd[1]: session-45.scope: Consumed 1.115s CPU time.
Mar 14 00:21:24.184800 systemd-logind[1451]: Session 45 logged out. Waiting for processes to exit.
Mar 14 00:21:24.186953 systemd-logind[1451]: Removed session 45.