Mar 14 00:35:50.854470 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 13 22:25:24 -00 2026 Mar 14 00:35:50.854502 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:35:50.854604 kernel: BIOS-provided physical RAM map: Mar 14 00:35:50.854615 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 14 00:35:50.854624 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 14 00:35:50.854635 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 14 00:35:50.854646 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 14 00:35:50.854656 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 14 00:35:50.854665 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 14 00:35:50.854679 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 14 00:35:50.854689 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 14 00:35:50.854698 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 14 00:35:50.854739 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 14 00:35:50.854749 kernel: NX (Execute Disable) protection: active Mar 14 00:35:50.854761 kernel: APIC: Static calls initialized Mar 14 00:35:50.854800 kernel: SMBIOS 2.8 present. 
Mar 14 00:35:50.854813 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 14 00:35:50.854823 kernel: Hypervisor detected: KVM Mar 14 00:35:50.854833 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 14 00:35:50.854843 kernel: kvm-clock: using sched offset of 10678133386 cycles Mar 14 00:35:50.854854 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 14 00:35:50.854867 kernel: tsc: Detected 2445.426 MHz processor Mar 14 00:35:50.854877 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 14 00:35:50.854888 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 14 00:35:50.854903 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 14 00:35:50.854913 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 14 00:35:50.854924 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 14 00:35:50.854934 kernel: Using GB pages for direct mapping Mar 14 00:35:50.854945 kernel: ACPI: Early table checksum verification disabled Mar 14 00:35:50.854955 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 14 00:35:50.854965 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:35:50.854976 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:35:50.854986 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:35:50.855000 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 14 00:35:50.855010 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:35:50.855021 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:35:50.855031 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 14 00:35:50.855041 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Mar 14 00:35:50.855052 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed] Mar 14 00:35:50.855065 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9] Mar 14 00:35:50.855081 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 14 00:35:50.855095 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d] Mar 14 00:35:50.855106 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5] Mar 14 00:35:50.855117 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1] Mar 14 00:35:50.855128 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419] Mar 14 00:35:50.855139 kernel: No NUMA configuration found Mar 14 00:35:50.855150 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 14 00:35:50.856249 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 14 00:35:50.856269 kernel: Zone ranges: Mar 14 00:35:50.856280 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 14 00:35:50.856293 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 14 00:35:50.856303 kernel: Normal empty Mar 14 00:35:50.856316 kernel: Movable zone start for each node Mar 14 00:35:50.856327 kernel: Early memory node ranges Mar 14 00:35:50.856339 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 14 00:35:50.856351 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 14 00:35:50.856362 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Mar 14 00:35:50.856380 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 14 00:35:50.856433 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 14 00:35:50.856447 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 14 00:35:50.856457 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 14 00:35:50.856469 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 14 00:35:50.856481 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 14 00:35:50.856493 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 14 00:35:50.856503 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 14 00:35:50.856587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 14 00:35:50.856608 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 14 00:35:50.856621 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 14 00:35:50.856632 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 14 00:35:50.856645 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 14 00:35:50.856655 kernel: TSC deadline timer available Mar 14 00:35:50.856667 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 14 00:35:50.856678 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 14 00:35:50.856690 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 14 00:35:50.856739 kernel: kvm-guest: setup PV sched yield Mar 14 00:35:50.856760 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 14 00:35:50.856772 kernel: Booting paravirtualized kernel on KVM Mar 14 00:35:50.856784 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 14 00:35:50.856795 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 14 00:35:50.856807 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288 Mar 14 00:35:50.856818 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152 Mar 14 00:35:50.856830 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 14 00:35:50.856841 kernel: kvm-guest: PV spinlocks enabled Mar 14 00:35:50.856851 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 14 00:35:50.856867 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7 Mar 14 00:35:50.856878 kernel: random: crng init done Mar 14 00:35:50.856888 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 14 00:35:50.856898 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 14 00:35:50.856908 kernel: Fallback order for Node 0: 0 Mar 14 00:35:50.856919 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Mar 14 00:35:50.856929 kernel: Policy zone: DMA32 Mar 14 00:35:50.856938 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 14 00:35:50.856953 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved) Mar 14 00:35:50.856963 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 14 00:35:50.856974 kernel: ftrace: allocating 37996 entries in 149 pages Mar 14 00:35:50.856983 kernel: ftrace: allocated 149 pages with 4 groups Mar 14 00:35:50.856993 kernel: Dynamic Preempt: voluntary Mar 14 00:35:50.857003 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 14 00:35:50.857014 kernel: rcu: RCU event tracing is enabled. Mar 14 00:35:50.857025 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 14 00:35:50.857035 kernel: Trampoline variant of Tasks RCU enabled. Mar 14 00:35:50.857049 kernel: Rude variant of Tasks RCU enabled. Mar 14 00:35:50.857062 kernel: Tracing variant of Tasks RCU enabled. Mar 14 00:35:50.857073 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 14 00:35:50.857084 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 14 00:35:50.857134 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 14 00:35:50.857147 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 14 00:35:50.857157 kernel: Console: colour VGA+ 80x25 Mar 14 00:35:50.860309 kernel: printk: console [ttyS0] enabled Mar 14 00:35:50.860324 kernel: ACPI: Core revision 20230628 Mar 14 00:35:50.860342 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 14 00:35:50.860353 kernel: APIC: Switch to symmetric I/O mode setup Mar 14 00:35:50.860363 kernel: x2apic enabled Mar 14 00:35:50.860374 kernel: APIC: Switched APIC routing to: physical x2apic Mar 14 00:35:50.860385 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 14 00:35:50.860396 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 14 00:35:50.860409 kernel: kvm-guest: setup PV IPIs Mar 14 00:35:50.860420 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 14 00:35:50.860446 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 14 00:35:50.860458 kernel: Calibrating delay loop (skipped) preset value.. 
4890.85 BogoMIPS (lpj=2445426) Mar 14 00:35:50.860469 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 14 00:35:50.860480 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 14 00:35:50.860495 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 14 00:35:50.860506 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 14 00:35:50.860622 kernel: Spectre V2 : Mitigation: Retpolines Mar 14 00:35:50.860636 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 14 00:35:50.860648 kernel: Speculative Store Bypass: Vulnerable Mar 14 00:35:50.860664 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 14 00:35:50.860710 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 14 00:35:50.860722 kernel: active return thunk: srso_alias_return_thunk Mar 14 00:35:50.860733 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 14 00:35:50.860745 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Mar 14 00:35:50.860756 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Mar 14 00:35:50.860767 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 14 00:35:50.860778 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 14 00:35:50.860794 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 14 00:35:50.860805 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 14 00:35:50.860817 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Mar 14 00:35:50.860828 kernel: Freeing SMP alternatives memory: 32K Mar 14 00:35:50.860839 kernel: pid_max: default: 32768 minimum: 301 Mar 14 00:35:50.860851 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 14 00:35:50.860862 kernel: landlock: Up and running. Mar 14 00:35:50.860873 kernel: SELinux: Initializing. Mar 14 00:35:50.860884 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 14 00:35:50.860899 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 14 00:35:50.860910 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Mar 14 00:35:50.860922 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:35:50.860933 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:35:50.860944 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 14 00:35:50.860960 kernel: Performance Events: PMU not available due to virtualization, using software events only. Mar 14 00:35:50.860972 kernel: signal: max sigframe size: 1776 Mar 14 00:35:50.861009 kernel: rcu: Hierarchical SRCU implementation. Mar 14 00:35:50.861022 kernel: rcu: Max phase no-delay instances is 400. Mar 14 00:35:50.861037 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 14 00:35:50.861048 kernel: smp: Bringing up secondary CPUs ... Mar 14 00:35:50.861059 kernel: smpboot: x86: Booting SMP configuration: Mar 14 00:35:50.861071 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 14 00:35:50.861082 kernel: smp: Brought up 1 node, 4 CPUs Mar 14 00:35:50.861093 kernel: smpboot: Max logical packages: 1 Mar 14 00:35:50.861104 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS) Mar 14 00:35:50.861116 kernel: devtmpfs: initialized Mar 14 00:35:50.861127 kernel: x86/mm: Memory block size: 128MB Mar 14 00:35:50.861142 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 14 00:35:50.861153 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 14 00:35:50.861210 kernel: pinctrl core: initialized pinctrl subsystem Mar 14 00:35:50.861224 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 14 00:35:50.861235 kernel: audit: initializing netlink subsys (disabled) Mar 14 00:35:50.861247 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 14 00:35:50.861260 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 14 00:35:50.861271 kernel: audit: type=2000 audit(1773448546.524:1): state=initialized audit_enabled=0 res=1 Mar 14 00:35:50.861284 kernel: cpuidle: using governor menu Mar 14 00:35:50.861302 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 14 00:35:50.861313 kernel: dca service started, version 1.12.1 Mar 14 00:35:50.861324 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 14 00:35:50.861334 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 14 00:35:50.861344 kernel: PCI: Using configuration type 1 for base access Mar 14 00:35:50.861354 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 14 00:35:50.861365 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 14 00:35:50.861376 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 14 00:35:50.861387 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 14 00:35:50.861403 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 14 00:35:50.861414 kernel: ACPI: Added _OSI(Module Device) Mar 14 00:35:50.861425 kernel: ACPI: Added _OSI(Processor Device) Mar 14 00:35:50.861436 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 14 00:35:50.861447 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 14 00:35:50.861458 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 14 00:35:50.861469 kernel: ACPI: Interpreter enabled Mar 14 00:35:50.861480 kernel: ACPI: PM: (supports S0 S3 S5) Mar 14 00:35:50.861492 kernel: ACPI: Using IOAPIC for interrupt routing Mar 14 00:35:50.861508 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 14 00:35:50.861612 kernel: PCI: Using E820 reservations for host bridge windows Mar 14 00:35:50.861625 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 14 00:35:50.861637 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 14 00:35:50.862125 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 14 00:35:50.862409 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 14 00:35:50.862705 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 14 00:35:50.862735 kernel: PCI host bridge to bus 0000:00 Mar 14 00:35:50.864358 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 14 00:35:50.864669 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 14 00:35:50.864896 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 14 00:35:50.865116 kernel: pci_bus 0000:00: 
root bus resource [mem 0x9d000000-0xafffffff window] Mar 14 00:35:50.865827 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 14 00:35:50.866039 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 14 00:35:50.868379 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 14 00:35:50.868943 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 14 00:35:50.869343 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 14 00:35:50.869661 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 14 00:35:50.869894 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 14 00:35:50.870121 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 14 00:35:50.872508 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 14 00:35:50.872969 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 14 00:35:50.873416 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 14 00:35:50.873753 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 14 00:35:50.876693 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 14 00:35:50.877763 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 14 00:35:50.878147 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 14 00:35:50.878494 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 14 00:35:50.878855 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 14 00:35:50.879160 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 14 00:35:50.879416 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 14 00:35:50.879708 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 14 00:35:50.879905 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Mar 14 00:35:50.880091 kernel: pci 0000:00:04.0: reg 0x30: [mem 
0xfeb80000-0xfebbffff pref] Mar 14 00:35:50.881038 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 14 00:35:50.881463 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 14 00:35:50.881977 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 14 00:35:50.884503 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 14 00:35:50.884849 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 14 00:35:50.886053 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 14 00:35:50.886477 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 14 00:35:50.886509 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 14 00:35:50.886600 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 14 00:35:50.886612 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 14 00:35:50.886626 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 14 00:35:50.886636 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 14 00:35:50.886648 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 14 00:35:50.886658 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 14 00:35:50.886669 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 14 00:35:50.886680 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 14 00:35:50.886698 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 14 00:35:50.886709 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 14 00:35:50.886721 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 14 00:35:50.886733 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 14 00:35:50.886744 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 14 00:35:50.886755 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 14 00:35:50.886765 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 14 00:35:50.886777 
kernel: iommu: Default domain type: Translated Mar 14 00:35:50.886788 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 14 00:35:50.886804 kernel: PCI: Using ACPI for IRQ routing Mar 14 00:35:50.886817 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 14 00:35:50.886828 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 14 00:35:50.886840 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 14 00:35:50.887114 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 14 00:35:50.889061 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 14 00:35:50.889344 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 14 00:35:50.889367 kernel: vgaarb: loaded Mar 14 00:35:50.889387 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 14 00:35:50.889399 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 14 00:35:50.889411 kernel: clocksource: Switched to clocksource kvm-clock Mar 14 00:35:50.889422 kernel: VFS: Disk quotas dquot_6.6.0 Mar 14 00:35:50.889434 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 14 00:35:50.889446 kernel: pnp: PnP ACPI init Mar 14 00:35:50.890074 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 14 00:35:50.890094 kernel: pnp: PnP ACPI: found 6 devices Mar 14 00:35:50.890112 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 14 00:35:50.890123 kernel: NET: Registered PF_INET protocol family Mar 14 00:35:50.890133 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 14 00:35:50.890145 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 14 00:35:50.891390 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 14 00:35:50.891414 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Mar 14 00:35:50.891426 kernel: TCP 
bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 14 00:35:50.891436 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 14 00:35:50.891447 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:35:50.891465 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 14 00:35:50.891475 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 14 00:35:50.891486 kernel: NET: Registered PF_XDP protocol family Mar 14 00:35:50.891755 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 14 00:35:50.892015 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 14 00:35:50.892787 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 14 00:35:50.893067 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 14 00:35:50.894368 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 14 00:35:50.894667 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 14 00:35:50.894689 kernel: PCI: CLS 0 bytes, default 64 Mar 14 00:35:50.894701 kernel: Initialise system trusted keyrings Mar 14 00:35:50.894714 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 14 00:35:50.894725 kernel: Key type asymmetric registered Mar 14 00:35:50.894737 kernel: Asymmetric key parser 'x509' registered Mar 14 00:35:50.894749 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 14 00:35:50.894759 kernel: io scheduler mq-deadline registered Mar 14 00:35:50.894770 kernel: io scheduler kyber registered Mar 14 00:35:50.894781 kernel: io scheduler bfq registered Mar 14 00:35:50.894801 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 14 00:35:50.894848 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 14 00:35:50.894862 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 14 00:35:50.894874 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 14 00:35:50.894884 kernel: Serial: 
8250/16550 driver, 4 ports, IRQ sharing enabled Mar 14 00:35:50.894895 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 14 00:35:50.894906 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 14 00:35:50.894917 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 14 00:35:50.894928 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 14 00:35:50.895283 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 14 00:35:50.895478 kernel: rtc_cmos 00:04: registered as rtc0 Mar 14 00:35:50.895493 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 14 00:35:50.895757 kernel: rtc_cmos 00:04: setting system clock to 2026-03-14T00:35:49 UTC (1773448549) Mar 14 00:35:50.895941 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 14 00:35:50.895956 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 14 00:35:50.895967 kernel: NET: Registered PF_INET6 protocol family Mar 14 00:35:50.895985 kernel: Segment Routing with IPv6 Mar 14 00:35:50.895996 kernel: In-situ OAM (IOAM) with IPv6 Mar 14 00:35:50.896007 kernel: NET: Registered PF_PACKET protocol family Mar 14 00:35:50.896018 kernel: Key type dns_resolver registered Mar 14 00:35:50.896029 kernel: IPI shorthand broadcast: enabled Mar 14 00:35:50.896040 kernel: sched_clock: Marking stable (3007045504, 517841163)->(4134742351, -609855684) Mar 14 00:35:50.896051 kernel: registered taskstats version 1 Mar 14 00:35:50.896062 kernel: Loading compiled-in X.509 certificates Mar 14 00:35:50.896073 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: a10808ddb7a43f470807cfbbb5be2c08229c2dec' Mar 14 00:35:50.896088 kernel: Key type .fscrypt registered Mar 14 00:35:50.896099 kernel: Key type fscrypt-provisioning registered Mar 14 00:35:50.896110 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 14 00:35:50.896121 kernel: ima: Allocated hash algorithm: sha1 Mar 14 00:35:50.896132 kernel: ima: No architecture policies found Mar 14 00:35:50.896143 kernel: clk: Disabling unused clocks Mar 14 00:35:50.896154 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 14 00:35:50.898248 kernel: Write protecting the kernel read-only data: 36864k Mar 14 00:35:50.898263 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 14 00:35:50.898284 kernel: Run /init as init process Mar 14 00:35:50.898299 kernel: with arguments: Mar 14 00:35:50.898313 kernel: /init Mar 14 00:35:50.898324 kernel: with environment: Mar 14 00:35:50.898336 kernel: HOME=/ Mar 14 00:35:50.898349 kernel: TERM=linux Mar 14 00:35:50.898362 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 14 00:35:50.898377 systemd[1]: Detected virtualization kvm. Mar 14 00:35:50.898456 systemd[1]: Detected architecture x86-64. Mar 14 00:35:50.898473 systemd[1]: Running in initrd. Mar 14 00:35:50.898487 systemd[1]: No hostname configured, using default hostname. Mar 14 00:35:50.898497 systemd[1]: Hostname set to . Mar 14 00:35:50.898511 systemd[1]: Initializing machine ID from VM UUID. Mar 14 00:35:50.898667 systemd[1]: Queued start job for default target initrd.target. Mar 14 00:35:50.898680 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 14 00:35:50.898691 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 14 00:35:50.898710 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 14 00:35:50.898721 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 14 00:35:50.898734 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 14 00:35:50.898749 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 14 00:35:50.898763 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 14 00:35:50.898777 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 14 00:35:50.898790 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:35:50.898810 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:35:50.898822 systemd[1]: Reached target paths.target - Path Units. Mar 14 00:35:50.898835 systemd[1]: Reached target slices.target - Slice Units. Mar 14 00:35:50.898849 systemd[1]: Reached target swap.target - Swaps. Mar 14 00:35:50.898888 systemd[1]: Reached target timers.target - Timer Units. Mar 14 00:35:50.898904 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 14 00:35:50.898926 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 14 00:35:50.898938 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 14 00:35:50.898952 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 14 00:35:50.898964 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 14 00:35:50.898976 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 14 00:35:50.898987 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 14 00:35:50.898999 systemd[1]: Reached target sockets.target - Socket Units. 
Mar 14 00:35:50.899011 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 14 00:35:50.899023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 14 00:35:50.899039 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 14 00:35:50.899052 systemd[1]: Starting systemd-fsck-usr.service... Mar 14 00:35:50.899065 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 14 00:35:50.899078 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 14 00:35:50.899090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 14 00:35:50.899140 systemd-journald[195]: Collecting audit messages is disabled. Mar 14 00:35:50.900264 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 14 00:35:50.900280 systemd-journald[195]: Journal started Mar 14 00:35:50.900305 systemd-journald[195]: Runtime Journal (/run/log/journal/1d6f32e043594f26b5f110431445dd93) is 6.0M, max 48.4M, 42.3M free. Mar 14 00:35:50.888867 systemd-modules-load[196]: Inserted module 'overlay' Mar 14 00:35:50.916912 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 14 00:35:50.935892 systemd[1]: Started systemd-journald.service - Journal Service. Mar 14 00:35:50.939829 systemd[1]: Finished systemd-fsck-usr.service. Mar 14 00:35:51.003966 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 14 00:35:51.376382 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 14 00:35:51.376437 kernel: Bridge firewalling registered Mar 14 00:35:51.059401 systemd-modules-load[196]: Inserted module 'br_netfilter' Mar 14 00:35:51.415045 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Mar 14 00:35:51.433670 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:35:51.459110 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:35:51.465872 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 14 00:35:51.514931 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:35:51.525025 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:35:51.539758 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:35:51.551872 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:35:51.601022 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:35:51.608817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:35:51.634640 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:35:51.670764 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 14 00:35:51.682346 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:35:51.788303 dracut-cmdline[230]: dracut-dracut-053
Mar 14 00:35:51.799329 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=06bcfe4e320f7b61768d05159b69b4eeccebc9d161fb2cdaf8d6998ab1e14ac7
Mar 14 00:35:51.838821 systemd-resolved[231]: Positive Trust Anchors:
Mar 14 00:35:51.838874 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:35:51.838920 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:35:51.843002 systemd-resolved[231]: Defaulting to hostname 'linux'.
Mar 14 00:35:51.846033 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:35:51.849685 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:35:52.076270 kernel: SCSI subsystem initialized
Mar 14 00:35:52.088963 kernel: Loading iSCSI transport class v2.0-870.
Mar 14 00:35:52.117863 kernel: iscsi: registered transport (tcp)
Mar 14 00:35:52.163301 kernel: iscsi: registered transport (qla4xxx)
Mar 14 00:35:52.163459 kernel: QLogic iSCSI HBA Driver
Mar 14 00:35:52.530400 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:35:52.587160 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 14 00:35:52.685091 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 14 00:35:52.685216 kernel: device-mapper: uevent: version 1.0.3
Mar 14 00:35:52.692712 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 14 00:35:52.832161 kernel: raid6: avx2x4 gen() 15719 MB/s
Mar 14 00:35:52.855463 kernel: raid6: avx2x2 gen() 15125 MB/s
Mar 14 00:35:52.875695 kernel: raid6: avx2x1 gen() 8890 MB/s
Mar 14 00:35:52.875804 kernel: raid6: using algorithm avx2x4 gen() 15719 MB/s
Mar 14 00:35:52.900741 kernel: raid6: .... xor() 3191 MB/s, rmw enabled
Mar 14 00:35:52.900828 kernel: raid6: using avx2x2 recovery algorithm
Mar 14 00:35:52.952892 kernel: xor: automatically using best checksumming function avx
Mar 14 00:35:53.531712 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 14 00:35:53.569374 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:35:53.607093 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:35:53.650943 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 14 00:35:53.670833 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:35:53.710846 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 14 00:35:53.763315 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation
Mar 14 00:35:53.881684 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:35:53.909816 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:35:54.122781 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:35:54.193655 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 14 00:35:54.318835 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:35:54.343995 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 14 00:35:54.379259 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:35:54.391402 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:35:54.439698 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 14 00:35:54.438434 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 14 00:35:54.459286 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:35:54.470921 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:35:54.534303 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 14 00:35:54.534657 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 14 00:35:54.534684 kernel: GPT:9289727 != 19775487
Mar 14 00:35:54.534705 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 14 00:35:54.534721 kernel: GPT:9289727 != 19775487
Mar 14 00:35:54.534737 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 14 00:35:54.534753 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:35:54.552309 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:35:54.564234 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:35:54.714171 kernel: cryptd: max_cpu_qlen set to 1000
Mar 14 00:35:54.714263 kernel: libata version 3.00 loaded.
Mar 14 00:35:54.564477 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:35:54.745882 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (474)
Mar 14 00:35:54.573336 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:35:54.805765 kernel: BTRFS: device fsid cd4a88d6-c21b-44c8-aac6-68c13cee1def devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (462)
Mar 14 00:35:54.658810 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:35:54.668423 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 14 00:35:54.737954 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 14 00:35:54.888642 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 14 00:35:54.937918 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 00:35:54.956641 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 14 00:35:55.370009 kernel: ahci 0000:00:1f.2: version 3.0
Mar 14 00:35:55.370459 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 14 00:35:55.370485 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 14 00:35:55.371414 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 14 00:35:55.375395 kernel: scsi host0: ahci
Mar 14 00:35:55.375775 kernel: scsi host1: ahci
Mar 14 00:35:55.376029 kernel: scsi host2: ahci
Mar 14 00:35:55.376358 kernel: scsi host3: ahci
Mar 14 00:35:55.376768 kernel: scsi host4: ahci
Mar 14 00:35:55.377048 kernel: scsi host5: ahci
Mar 14 00:35:55.377382 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 14 00:35:55.377404 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 14 00:35:55.377423 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 14 00:35:55.377441 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 14 00:35:55.377459 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 14 00:35:55.377476 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 14 00:35:55.377493 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 14 00:35:55.377612 kernel: AES CTR mode by8 optimization enabled
Mar 14 00:35:55.377637 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 14 00:35:55.377654 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 14 00:35:55.389717 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 14 00:35:55.389761 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 14 00:35:55.400372 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 14 00:35:55.400728 kernel: ata3.00: applying bridge limits
Mar 14 00:35:55.407659 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 14 00:35:55.412272 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 14 00:35:55.417859 kernel: ata3.00: configured for UDMA/100
Mar 14 00:35:55.421331 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 14 00:35:55.449084 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 14 00:35:55.432484 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:35:55.471156 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 14 00:35:55.485468 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 14 00:35:55.507454 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:35:55.507491 disk-uuid[556]: Primary Header is updated.
Mar 14 00:35:55.507491 disk-uuid[556]: Secondary Entries is updated.
Mar 14 00:35:55.507491 disk-uuid[556]: Secondary Header is updated.
Mar 14 00:35:55.524836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:35:55.531654 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:35:55.531703 kernel: block device autoloading is deprecated and will be removed.
Mar 14 00:35:55.554943 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 14 00:35:55.555344 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 14 00:35:55.567621 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 14 00:35:55.569097 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:35:56.526627 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 14 00:35:56.528481 disk-uuid[562]: The operation has completed successfully.
Mar 14 00:35:56.572311 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 14 00:35:56.572634 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 14 00:35:56.604819 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 14 00:35:56.615440 sh[597]: Success
Mar 14 00:35:56.637645 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 14 00:35:56.693613 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 14 00:35:56.708694 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 14 00:35:56.714427 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 14 00:35:56.738935 kernel: BTRFS info (device dm-0): first mount of filesystem cd4a88d6-c21b-44c8-aac6-68c13cee1def
Mar 14 00:35:56.738998 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:35:56.739011 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 14 00:35:56.745285 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 14 00:35:56.745350 kernel: BTRFS info (device dm-0): using free space tree
Mar 14 00:35:56.759925 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 14 00:35:56.764827 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 14 00:35:56.781868 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 14 00:35:56.786604 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 14 00:35:56.820303 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:35:56.820363 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:35:56.820381 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:35:56.832636 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:35:56.849041 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 14 00:35:56.858040 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:35:56.867034 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 14 00:35:56.882974 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 14 00:35:56.963081 ignition[701]: Ignition 2.19.0
Mar 14 00:35:56.963617 ignition[701]: Stage: fetch-offline
Mar 14 00:35:56.965906 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:35:56.965928 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:35:56.976292 ignition[701]: parsed url from cmdline: ""
Mar 14 00:35:56.976338 ignition[701]: no config URL provided
Mar 14 00:35:56.976349 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Mar 14 00:35:56.976366 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Mar 14 00:35:56.976410 ignition[701]: op(1): [started] loading QEMU firmware config module
Mar 14 00:35:56.976419 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 14 00:35:57.001669 ignition[701]: op(1): [finished] loading QEMU firmware config module
Mar 14 00:35:57.054479 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:35:57.083488 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:35:57.322791 ignition[701]: parsing config with SHA512: 1c4fedd26b7b1cd2e300d1435add0b9c3c690a34a811489525b79a9a05af3cec711e20901eef2736aba273cc849652272a4f975acf4fd08bd7aeb1798664e1ec
Mar 14 00:35:57.333402 unknown[701]: fetched base config from "system"
Mar 14 00:35:57.334066 ignition[701]: fetch-offline: fetch-offline passed
Mar 14 00:35:57.333471 unknown[701]: fetched user config from "qemu"
Mar 14 00:35:57.334241 ignition[701]: Ignition finished successfully
Mar 14 00:35:57.343093 systemd-networkd[785]: lo: Link UP
Mar 14 00:35:57.343103 systemd-networkd[785]: lo: Gained carrier
Mar 14 00:35:57.346082 systemd-networkd[785]: Enumeration completed
Mar 14 00:35:57.347773 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:35:57.347781 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:35:57.350001 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:35:57.353968 systemd-networkd[785]: eth0: Link UP
Mar 14 00:35:57.353974 systemd-networkd[785]: eth0: Gained carrier
Mar 14 00:35:57.353990 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:35:57.383963 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:35:57.394945 systemd[1]: Reached target network.target - Network.
Mar 14 00:35:57.416763 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 14 00:35:57.437715 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 14 00:35:57.448005 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 14 00:35:57.494456 ignition[789]: Ignition 2.19.0
Mar 14 00:35:57.494597 ignition[789]: Stage: kargs
Mar 14 00:35:57.494869 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:35:57.494890 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:35:57.496333 ignition[789]: kargs: kargs passed
Mar 14 00:35:57.496413 ignition[789]: Ignition finished successfully
Mar 14 00:35:57.518165 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 14 00:35:57.548021 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 14 00:35:57.572415 ignition[797]: Ignition 2.19.0
Mar 14 00:35:57.572463 ignition[797]: Stage: disks
Mar 14 00:35:57.572763 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Mar 14 00:35:57.572781 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:35:57.573993 ignition[797]: disks: disks passed
Mar 14 00:35:57.574055 ignition[797]: Ignition finished successfully
Mar 14 00:35:57.595611 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 14 00:35:57.600785 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 14 00:35:57.600974 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 14 00:35:57.604353 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:35:57.606446 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:35:57.610388 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:35:57.649965 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 14 00:35:57.674223 systemd-fsck[807]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 14 00:35:57.680988 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 14 00:35:57.705743 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 14 00:35:57.886682 kernel: EXT4-fs (vda9): mounted filesystem 08e1a4ba-bbe3-4d29-aaf8-5eb22e9a9bf3 r/w with ordered data mode. Quota mode: none.
Mar 14 00:35:57.888693 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 14 00:35:57.898663 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:35:57.927028 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:35:57.938887 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 14 00:35:57.972016 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (815)
Mar 14 00:35:57.972064 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:35:57.972085 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:35:57.972105 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:35:57.943773 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 14 00:35:57.991911 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:35:57.943847 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 14 00:35:57.943891 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 14 00:35:57.973694 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 14 00:35:57.994167 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:35:58.028982 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 14 00:35:58.102901 initrd-setup-root[839]: cut: /sysroot/etc/passwd: No such file or directory
Mar 14 00:35:58.111166 initrd-setup-root[846]: cut: /sysroot/etc/group: No such file or directory
Mar 14 00:35:58.123799 initrd-setup-root[853]: cut: /sysroot/etc/shadow: No such file or directory
Mar 14 00:35:58.130165 initrd-setup-root[860]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 14 00:35:58.295656 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 14 00:35:58.311718 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 14 00:35:58.319728 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 14 00:35:58.335010 kernel: BTRFS info (device vda6): last unmount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:35:58.326810 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 14 00:35:58.358398 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 14 00:35:58.379332 ignition[927]: INFO : Ignition 2.19.0
Mar 14 00:35:58.379332 ignition[927]: INFO : Stage: mount
Mar 14 00:35:58.387127 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:35:58.387127 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:35:58.387127 ignition[927]: INFO : mount: mount passed
Mar 14 00:35:58.387127 ignition[927]: INFO : Ignition finished successfully
Mar 14 00:35:58.405099 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 14 00:35:58.434899 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 14 00:35:58.795028 systemd-networkd[785]: eth0: Gained IPv6LL
Mar 14 00:35:58.905920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 14 00:35:58.939985 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Mar 14 00:35:58.940399 kernel: BTRFS info (device vda6): first mount of filesystem 0ec14b75-fea9-4657-9245-934c6406ae1a
Mar 14 00:35:58.951150 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 14 00:35:58.951280 kernel: BTRFS info (device vda6): using free space tree
Mar 14 00:35:58.970691 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 14 00:35:58.976803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 14 00:35:59.026927 ignition[958]: INFO : Ignition 2.19.0
Mar 14 00:35:59.026927 ignition[958]: INFO : Stage: files
Mar 14 00:35:59.034947 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:35:59.034947 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:35:59.034947 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Mar 14 00:35:59.058402 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 14 00:35:59.058402 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 14 00:35:59.058402 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 14 00:35:59.058402 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 14 00:35:59.058402 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 14 00:35:59.058402 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:35:59.058402 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 14 00:35:59.046585 unknown[958]: wrote ssh authorized keys file for user: core
Mar 14 00:35:59.135596 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 14 00:35:59.240734 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 14 00:35:59.240734 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:35:59.261754 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 14 00:35:59.404903 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 14 00:35:59.772335 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 14 00:35:59.772335 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 14 00:35:59.787677 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 14 00:35:59.795945 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:35:59.804359 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 14 00:35:59.815402 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:35:59.824749 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 14 00:35:59.824749 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:35:59.844290 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 14 00:35:59.851456 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:35:59.860876 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 14 00:35:59.868309 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:35:59.881744 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:35:59.893300 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:35:59.904273 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 14 00:36:00.213876 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 14 00:36:00.948720 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 14 00:36:00.948720 ignition[958]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 14 00:36:00.967396 ignition[958]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:36:00.976869 ignition[958]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 14 00:36:00.976869 ignition[958]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 14 00:36:00.992413 ignition[958]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 14 00:36:00.992413 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 14 00:36:01.010182 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 14 00:36:01.010182 ignition[958]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 14 00:36:01.010182 ignition[958]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 14 00:36:01.100437 ignition[958]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 14 00:36:01.112846 ignition[958]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 14 00:36:01.124597 ignition[958]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 14 00:36:01.124597 ignition[958]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 14 00:36:01.124597 ignition[958]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 14 00:36:01.124597 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:36:01.124597 ignition[958]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 14 00:36:01.124597 ignition[958]: INFO : files: files passed
Mar 14 00:36:01.124597 ignition[958]: INFO : Ignition finished successfully
Mar 14 00:36:01.172066 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 14 00:36:01.188476 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 14 00:36:01.202105 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 14 00:36:01.203281 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 14 00:36:01.203631 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 14 00:36:01.262685 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 14 00:36:01.274825 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:36:01.284157 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:36:01.284157 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 14 00:36:01.299906 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:36:01.307821 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 14 00:36:01.338058 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 14 00:36:01.391420 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 14 00:36:01.391720 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 14 00:36:01.405510 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 14 00:36:01.433376 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 14 00:36:01.433870 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 14 00:36:01.470333 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 14 00:36:01.503994 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 14 00:36:01.517928 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 14 00:36:01.554893 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 14 00:36:01.555350 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 14 00:36:01.570734 systemd[1]: Stopped target timers.target - Timer Units. Mar 14 00:36:01.593878 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 14 00:36:01.594179 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 14 00:36:01.601265 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 14 00:36:01.608284 systemd[1]: Stopped target basic.target - Basic System. Mar 14 00:36:01.616984 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 14 00:36:01.636162 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 14 00:36:01.644649 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 14 00:36:01.650976 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 14 00:36:01.659775 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 14 00:36:01.679713 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 14 00:36:01.691320 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 14 00:36:01.696355 systemd[1]: Stopped target swap.target - Swaps. Mar 14 00:36:01.713760 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 14 00:36:01.714097 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 14 00:36:01.732266 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 14 00:36:01.751321 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 14 00:36:01.758489 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Mar 14 00:36:01.760332 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:36:01.770923 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 14 00:36:01.771905 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 14 00:36:01.789994 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 14 00:36:01.790286 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 14 00:36:01.802024 systemd[1]: Stopped target paths.target - Path Units.
Mar 14 00:36:01.822659 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 14 00:36:01.832985 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:36:01.839889 systemd[1]: Stopped target slices.target - Slice Units.
Mar 14 00:36:01.854276 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 14 00:36:01.862628 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 14 00:36:01.862808 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 14 00:36:01.867076 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 14 00:36:01.867327 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 14 00:36:01.874342 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 14 00:36:01.874572 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 14 00:36:01.885889 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 14 00:36:01.886156 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 14 00:36:01.924011 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 14 00:36:01.935802 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 14 00:36:01.981965 ignition[1013]: INFO : Ignition 2.19.0
Mar 14 00:36:01.981965 ignition[1013]: INFO : Stage: umount
Mar 14 00:36:01.981965 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 14 00:36:01.981965 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 14 00:36:01.981965 ignition[1013]: INFO : umount: umount passed
Mar 14 00:36:01.981965 ignition[1013]: INFO : Ignition finished successfully
Mar 14 00:36:01.936030 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:36:01.939459 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 14 00:36:01.944408 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 14 00:36:01.944811 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:36:01.945664 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 14 00:36:01.946356 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 14 00:36:01.962862 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 14 00:36:01.963024 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 14 00:36:01.972474 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 14 00:36:01.972818 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 14 00:36:01.983489 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 14 00:36:01.987871 systemd[1]: Stopped target network.target - Network.
Mar 14 00:36:01.994335 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 14 00:36:01.994446 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 14 00:36:02.004376 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 14 00:36:02.004456 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 14 00:36:02.015817 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 14 00:36:02.015901 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 14 00:36:02.032049 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 14 00:36:02.032146 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 14 00:36:02.032767 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 14 00:36:02.037756 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 14 00:36:02.038961 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 14 00:36:02.039399 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 14 00:36:02.046897 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 14 00:36:02.047041 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 14 00:36:02.059754 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 14 00:36:02.059986 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 14 00:36:02.075736 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 14 00:36:02.075835 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 14 00:36:02.088762 systemd-networkd[785]: eth0: DHCPv6 lease lost
Mar 14 00:36:02.095849 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 14 00:36:02.096108 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 14 00:36:02.105301 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 14 00:36:02.105375 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:36:02.138811 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 14 00:36:02.416015 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 14 00:36:02.144062 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 14 00:36:02.144142 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 14 00:36:02.151349 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:36:02.151512 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:36:02.163306 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 14 00:36:02.163375 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:36:02.169194 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:36:02.201775 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 14 00:36:02.202041 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 14 00:36:02.211728 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 14 00:36:02.212086 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 14 00:36:02.223844 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 14 00:36:02.223942 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:36:02.234260 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 14 00:36:02.234355 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:36:02.245986 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 14 00:36:02.246088 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 14 00:36:02.248371 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 14 00:36:02.248446 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 14 00:36:02.255734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 14 00:36:02.255814 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 14 00:36:02.277142 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 14 00:36:02.289426 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 14 00:36:02.289642 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:36:02.297055 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 14 00:36:02.297138 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:36:02.309469 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 14 00:36:02.309700 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 14 00:36:02.320646 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 14 00:36:02.353927 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 14 00:36:02.369909 systemd[1]: Switching root.
Mar 14 00:36:02.577009 systemd-journald[195]: Journal stopped
Mar 14 00:36:06.069441 kernel: SELinux: policy capability network_peer_controls=1
Mar 14 00:36:06.070623 kernel: SELinux: policy capability open_perms=1
Mar 14 00:36:06.070702 kernel: SELinux: policy capability extended_socket_class=1
Mar 14 00:36:06.070733 kernel: SELinux: policy capability always_check_network=0
Mar 14 00:36:06.070768 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 14 00:36:06.070787 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 14 00:36:06.070805 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 14 00:36:06.070822 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 14 00:36:06.070839 kernel: audit: type=1403 audit(1773448562.678:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 14 00:36:06.070861 systemd[1]: Successfully loaded SELinux policy in 74.022ms.
Mar 14 00:36:06.070901 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.675ms.
Mar 14 00:36:06.070922 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 14 00:36:06.070941 systemd[1]: Detected virtualization kvm.
Mar 14 00:36:06.070966 systemd[1]: Detected architecture x86-64.
Mar 14 00:36:06.070985 systemd[1]: Detected first boot.
Mar 14 00:36:06.071003 systemd[1]: Initializing machine ID from VM UUID.
Mar 14 00:36:06.071022 zram_generator::config[1056]: No configuration found.
Mar 14 00:36:06.071042 systemd[1]: Populated /etc with preset unit settings.
Mar 14 00:36:06.071061 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 14 00:36:06.071079 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 14 00:36:06.071098 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 14 00:36:06.071124 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 14 00:36:06.071144 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 14 00:36:06.071163 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 14 00:36:06.071186 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 14 00:36:06.071203 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 14 00:36:06.073341 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 14 00:36:06.073377 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 14 00:36:06.073396 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 14 00:36:06.073422 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 14 00:36:06.073442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 14 00:36:06.073462 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 14 00:36:06.073481 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 14 00:36:06.073500 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 14 00:36:06.073612 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 14 00:36:06.073636 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 14 00:36:06.073655 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 14 00:36:06.073724 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 14 00:36:06.073741 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 14 00:36:06.073766 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 14 00:36:06.073784 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 14 00:36:06.073801 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 14 00:36:06.073819 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 14 00:36:06.073836 systemd[1]: Reached target slices.target - Slice Units.
Mar 14 00:36:06.073853 systemd[1]: Reached target swap.target - Swaps.
Mar 14 00:36:06.073874 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 14 00:36:06.073896 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 14 00:36:06.073914 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 14 00:36:06.073932 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 14 00:36:06.073951 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 14 00:36:06.073970 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 14 00:36:06.073988 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 14 00:36:06.074005 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 14 00:36:06.074024 systemd[1]: Mounting media.mount - External Media Directory...
Mar 14 00:36:06.074042 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:36:06.074065 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 14 00:36:06.074083 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 14 00:36:06.074100 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 14 00:36:06.074118 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 14 00:36:06.074137 systemd[1]: Reached target machines.target - Containers.
Mar 14 00:36:06.074154 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 14 00:36:06.074172 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 14 00:36:06.074200 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 14 00:36:06.074219 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 14 00:36:06.074316 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 14 00:36:06.074334 kernel: hrtimer: interrupt took 4407178 ns
Mar 14 00:36:06.074355 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 14 00:36:06.074374 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 14 00:36:06.074391 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 14 00:36:06.074409 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 14 00:36:06.074428 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 14 00:36:06.074446 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 14 00:36:06.074469 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 14 00:36:06.074487 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 14 00:36:06.074505 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 14 00:36:06.074612 kernel: fuse: init (API version 7.39)
Mar 14 00:36:06.074632 kernel: loop: module loaded
Mar 14 00:36:06.074649 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 14 00:36:06.074669 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 14 00:36:06.074687 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 14 00:36:06.074703 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 14 00:36:06.074775 systemd-journald[1140]: Collecting audit messages is disabled.
Mar 14 00:36:06.075896 kernel: ACPI: bus type drm_connector registered
Mar 14 00:36:06.076187 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 14 00:36:06.077849 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 14 00:36:06.077874 systemd[1]: Stopped verity-setup.service.
Mar 14 00:36:06.077895 systemd-journald[1140]: Journal started
Mar 14 00:36:06.077930 systemd-journald[1140]: Runtime Journal (/run/log/journal/1d6f32e043594f26b5f110431445dd93) is 6.0M, max 48.4M, 42.3M free.
Mar 14 00:36:03.768738 systemd[1]: Queued start job for default target multi-user.target.
Mar 14 00:36:03.806104 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 14 00:36:03.807306 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 14 00:36:03.807954 systemd[1]: systemd-journald.service: Consumed 2.186s CPU time.
Mar 14 00:36:06.100792 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:36:06.118161 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 14 00:36:06.121807 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 14 00:36:06.171965 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 14 00:36:06.181488 systemd[1]: Mounted media.mount - External Media Directory.
Mar 14 00:36:06.191413 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 14 00:36:06.207606 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 14 00:36:06.219832 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 14 00:36:06.234084 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 14 00:36:06.259139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 14 00:36:06.273646 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 14 00:36:06.275468 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 14 00:36:06.305852 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:36:06.307079 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:36:06.356315 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:36:06.357022 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:36:06.365397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:36:06.366449 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:36:06.377181 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 14 00:36:06.377669 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 14 00:36:06.387851 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:36:06.388330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:36:06.400870 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 14 00:36:06.413361 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 14 00:36:06.426060 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 14 00:36:06.510809 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 14 00:36:06.541763 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 14 00:36:06.588359 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 14 00:36:06.607656 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 14 00:36:06.607816 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 14 00:36:06.637509 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 14 00:36:06.673701 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 14 00:36:06.689674 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 14 00:36:06.697821 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 14 00:36:06.709016 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 14 00:36:06.755707 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 14 00:36:06.768343 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:36:06.777590 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 14 00:36:06.786875 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:36:06.800090 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:36:06.826751 systemd-journald[1140]: Time spent on flushing to /var/log/journal/1d6f32e043594f26b5f110431445dd93 is 67.200ms for 946 entries.
Mar 14 00:36:06.826751 systemd-journald[1140]: System Journal (/var/log/journal/1d6f32e043594f26b5f110431445dd93) is 8.0M, max 195.6M, 187.6M free.
Mar 14 00:36:06.961208 systemd-journald[1140]: Received client request to flush runtime journal.
Mar 14 00:36:06.825488 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 14 00:36:06.906286 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 14 00:36:06.920715 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 14 00:36:06.956508 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 14 00:36:06.975142 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 14 00:36:06.990127 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 14 00:36:07.019303 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 14 00:36:07.030743 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 14 00:36:07.065027 kernel: loop0: detected capacity change from 0 to 140768
Mar 14 00:36:07.069720 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:36:07.089873 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 14 00:36:07.120934 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 14 00:36:07.186146 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 14 00:36:07.257011 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 14 00:36:07.291455 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 14 00:36:07.311678 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 14 00:36:07.335185 udevadm[1186]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 14 00:36:07.357333 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 14 00:36:07.407766 kernel: loop1: detected capacity change from 0 to 142488
Mar 14 00:36:07.400103 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 14 00:36:07.628891 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Mar 14 00:36:07.629011 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Mar 14 00:36:07.631395 kernel: loop2: detected capacity change from 0 to 219192
Mar 14 00:36:07.666046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 14 00:36:07.936866 kernel: loop3: detected capacity change from 0 to 140768
Mar 14 00:36:08.133610 kernel: loop4: detected capacity change from 0 to 142488
Mar 14 00:36:08.270646 kernel: loop5: detected capacity change from 0 to 219192
Mar 14 00:36:08.387040 (sd-merge)[1195]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 14 00:36:08.388608 (sd-merge)[1195]: Merged extensions into '/usr'.
Mar 14 00:36:08.411118 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 14 00:36:08.411823 systemd[1]: Reloading...
Mar 14 00:36:08.917299 zram_generator::config[1221]: No configuration found.
Mar 14 00:36:09.385726 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:36:09.574617 systemd[1]: Reloading finished in 1161 ms.
Mar 14 00:36:09.583435 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 14 00:36:09.654821 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 14 00:36:09.662509 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 14 00:36:09.671471 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 14 00:36:09.708669 systemd[1]: Starting ensure-sysext.service...
Mar 14 00:36:09.724773 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 14 00:36:09.754898 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 14 00:36:09.766130 systemd[1]: Reloading requested from client PID 1259 ('systemctl') (unit ensure-sysext.service)...
Mar 14 00:36:09.766149 systemd[1]: Reloading...
Mar 14 00:36:09.801690 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 14 00:36:09.802292 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 14 00:36:09.804038 systemd-tmpfiles[1260]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 14 00:36:09.807680 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 14 00:36:09.807847 systemd-tmpfiles[1260]: ACLs are not supported, ignoring.
Mar 14 00:36:09.819729 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:36:09.819748 systemd-tmpfiles[1260]: Skipping /boot
Mar 14 00:36:09.839038 systemd-udevd[1261]: Using default interface naming scheme 'v255'.
Mar 14 00:36:09.879967 systemd-tmpfiles[1260]: Detected autofs mount point /boot during canonicalization of boot.
Mar 14 00:36:09.879989 systemd-tmpfiles[1260]: Skipping /boot
Mar 14 00:36:09.913607 zram_generator::config[1293]: No configuration found.
Mar 14 00:36:10.074493 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1304)
Mar 14 00:36:10.147498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:36:10.151732 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 14 00:36:10.169477 kernel: ACPI: button: Power Button [PWRF]
Mar 14 00:36:10.320483 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 14 00:36:10.326921 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 14 00:36:10.327123 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 14 00:36:10.331001 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 14 00:36:10.331080 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 14 00:36:10.342028 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 14 00:36:10.379916 systemd[1]: Reloading finished in 612 ms. Mar 14 00:36:10.390957 kernel: mousedev: PS/2 mouse device common for all mice Mar 14 00:36:10.465099 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 14 00:36:10.674777 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 14 00:36:10.775024 systemd[1]: Finished ensure-sysext.service. Mar 14 00:36:10.787365 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 14 00:36:10.866716 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:36:10.886218 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 14 00:36:10.892237 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 14 00:36:10.897854 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 14 00:36:10.929140 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 14 00:36:10.982321 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 14 00:36:11.001700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 14 00:36:11.009156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Mar 14 00:36:11.033230 kernel: kvm_amd: TSC scaling supported
Mar 14 00:36:11.033385 kernel: kvm_amd: Nested Virtualization enabled
Mar 14 00:36:11.033414 kernel: kvm_amd: Nested Paging enabled
Mar 14 00:36:11.044199 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 14 00:36:11.044358 kernel: kvm_amd: PMU virtualization is disabled
Mar 14 00:36:11.044408 augenrules[1379]: No rules
Mar 14 00:36:11.046184 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 14 00:36:11.056752 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 14 00:36:11.078146 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 14 00:36:11.088437 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 14 00:36:11.097987 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 14 00:36:11.102169 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 14 00:36:11.112090 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 14 00:36:11.118743 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 14 00:36:11.122667 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 14 00:36:11.133650 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 14 00:36:11.134063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 14 00:36:11.147512 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 14 00:36:11.157198 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 14 00:36:11.157681 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 14 00:36:11.164931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 14 00:36:11.165299 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 14 00:36:11.173606 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 14 00:36:11.173936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 14 00:36:11.181177 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 14 00:36:11.191498 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 14 00:36:11.228428 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 14 00:36:11.228983 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 14 00:36:11.325897 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 14 00:36:11.354765 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 14 00:36:11.428690 kernel: EDAC MC: Ver: 3.0.0
Mar 14 00:36:11.583839 systemd-networkd[1385]: lo: Link UP
Mar 14 00:36:11.583856 systemd-networkd[1385]: lo: Gained carrier
Mar 14 00:36:11.587628 systemd-networkd[1385]: Enumeration completed
Mar 14 00:36:11.589184 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:36:11.589357 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 14 00:36:11.594023 systemd-networkd[1385]: eth0: Link UP
Mar 14 00:36:11.594138 systemd-networkd[1385]: eth0: Gained carrier
Mar 14 00:36:11.594220 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 14 00:36:11.599902 systemd-resolved[1387]: Positive Trust Anchors:
Mar 14 00:36:11.599971 systemd-resolved[1387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 14 00:36:11.600017 systemd-resolved[1387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 14 00:36:11.611142 systemd-resolved[1387]: Defaulting to hostname 'linux'.
Mar 14 00:36:11.630699 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 14 00:36:11.632756 systemd-timesyncd[1388]: Network configuration changed, trying to establish connection.
Mar 14 00:36:12.442488 systemd-resolved[1387]: Clock change detected. Flushing caches.
Mar 14 00:36:12.442594 systemd-timesyncd[1388]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 14 00:36:12.442672 systemd-timesyncd[1388]: Initial clock synchronization to Sat 2026-03-14 00:36:12.442295 UTC.
Mar 14 00:36:12.488748 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 14 00:36:12.492935 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 14 00:36:12.503622 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 14 00:36:12.515478 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 14 00:36:12.522366 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 14 00:36:12.529240 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 14 00:36:12.535967 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 14 00:36:12.544609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 14 00:36:12.551657 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 14 00:36:12.568733 systemd[1]: Reached target network.target - Network.
Mar 14 00:36:12.573644 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 14 00:36:12.580975 systemd[1]: Reached target time-set.target - System Time Set.
Mar 14 00:36:12.601826 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 14 00:36:12.616373 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 14 00:36:12.637054 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:36:12.696301 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 14 00:36:12.706357 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 14 00:36:12.717409 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 14 00:36:12.723791 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 14 00:36:12.730425 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 14 00:36:12.737539 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 14 00:36:12.743398 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 14 00:36:12.750917 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 14 00:36:12.757302 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 14 00:36:12.757353 systemd[1]: Reached target paths.target - Path Units.
Mar 14 00:36:12.762064 systemd[1]: Reached target timers.target - Timer Units.
Mar 14 00:36:12.769317 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 14 00:36:12.779611 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 14 00:36:12.797671 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 14 00:36:12.811725 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 14 00:36:12.819812 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 14 00:36:12.826032 systemd[1]: Reached target sockets.target - Socket Units.
Mar 14 00:36:12.831330 systemd[1]: Reached target basic.target - Basic System.
Mar 14 00:36:12.837495 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 14 00:36:12.837581 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:36:12.837629 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 14 00:36:12.841810 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 14 00:36:12.853226 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 14 00:36:12.862604 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 14 00:36:12.873491 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 14 00:36:12.879019 jq[1429]: false
Mar 14 00:36:12.879684 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 14 00:36:12.884579 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 14 00:36:12.896042 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 14 00:36:12.906480 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 14 00:36:12.917677 extend-filesystems[1430]: Found loop3
Mar 14 00:36:12.917677 extend-filesystems[1430]: Found loop4
Mar 14 00:36:12.917677 extend-filesystems[1430]: Found loop5
Mar 14 00:36:12.929884 extend-filesystems[1430]: Found sr0
Mar 14 00:36:12.929884 extend-filesystems[1430]: Found vda
Mar 14 00:36:12.929884 extend-filesystems[1430]: Found vda1
Mar 14 00:36:12.929884 extend-filesystems[1430]: Found vda2
Mar 14 00:36:12.929884 extend-filesystems[1430]: Found vda3
Mar 14 00:36:12.929884 extend-filesystems[1430]: Found usr
Mar 14 00:36:12.929884 extend-filesystems[1430]: Found vda4
Mar 14 00:36:12.929884 extend-filesystems[1430]: Found vda6
Mar 14 00:36:12.929884 extend-filesystems[1430]: Found vda7
Mar 14 00:36:12.929884 extend-filesystems[1430]: Found vda9
Mar 14 00:36:12.929884 extend-filesystems[1430]: Checking size of /dev/vda9
Mar 14 00:36:13.079671 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 14 00:36:13.079751 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1306)
Mar 14 00:36:12.926417 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 14 00:36:13.080036 extend-filesystems[1430]: Resized partition /dev/vda9
Mar 14 00:36:12.950594 dbus-daemon[1428]: [system] SELinux support is enabled
Mar 14 00:36:12.959743 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 14 00:36:13.099448 extend-filesystems[1445]: resize2fs 1.47.1 (20-May-2024)
Mar 14 00:36:12.960744 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 14 00:36:12.961787 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 14 00:36:12.973615 systemd[1]: Starting update-engine.service - Update Engine...
Mar 14 00:36:13.110551 update_engine[1447]: I20260314 00:36:13.082773 1447 main.cc:92] Flatcar Update Engine starting
Mar 14 00:36:13.110551 update_engine[1447]: I20260314 00:36:13.088292 1447 update_check_scheduler.cc:74] Next update check in 5m15s
Mar 14 00:36:12.987272 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 14 00:36:12.999549 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 14 00:36:13.112608 jq[1449]: true
Mar 14 00:36:13.011207 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 14 00:36:13.049074 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 14 00:36:13.049530 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 14 00:36:13.050218 systemd[1]: motdgen.service: Deactivated successfully.
Mar 14 00:36:13.050533 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 14 00:36:13.089900 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 14 00:36:13.090404 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 14 00:36:13.119267 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 14 00:36:13.124601 (ntainerd)[1456]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 14 00:36:13.171978 jq[1455]: true
Mar 14 00:36:13.172824 extend-filesystems[1445]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 14 00:36:13.172824 extend-filesystems[1445]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 14 00:36:13.172824 extend-filesystems[1445]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 14 00:36:13.175903 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 14 00:36:13.218725 extend-filesystems[1430]: Resized filesystem in /dev/vda9
Mar 14 00:36:13.180272 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 14 00:36:13.280212 tar[1454]: linux-amd64/LICENSE
Mar 14 00:36:13.280212 tar[1454]: linux-amd64/helm
Mar 14 00:36:13.294310 systemd[1]: Started update-engine.service - Update Engine.
Mar 14 00:36:13.311971 systemd-logind[1446]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 14 00:36:13.312063 systemd-logind[1446]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 14 00:36:13.313665 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 14 00:36:13.332625 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 14 00:36:13.332712 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 14 00:36:13.332992 systemd-logind[1446]: New seat seat0.
Mar 14 00:36:13.352749 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 14 00:36:13.352789 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 14 00:36:13.381451 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 14 00:36:13.391556 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 14 00:36:13.423999 bash[1485]: Updated "/home/core/.ssh/authorized_keys"
Mar 14 00:36:13.428054 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 14 00:36:13.438925 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 14 00:36:13.456478 locksmithd[1484]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 14 00:36:13.649287 containerd[1456]: time="2026-03-14T00:36:13.647939594Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 14 00:36:13.680252 systemd-networkd[1385]: eth0: Gained IPv6LL
Mar 14 00:36:13.693821 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 14 00:36:13.706926 systemd[1]: Reached target network-online.target - Network is Online.
Mar 14 00:36:13.722730 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 14 00:36:13.729819 containerd[1456]: time="2026-03-14T00:36:13.725911280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:36:13.735408 containerd[1456]: time="2026-03-14T00:36:13.735356033Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:36:13.736556 containerd[1456]: time="2026-03-14T00:36:13.736254390Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 14 00:36:13.736556 containerd[1456]: time="2026-03-14T00:36:13.736451639Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 14 00:36:13.737321 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.738788010Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.738819900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.739002000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.739022908Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.739387961Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.739412235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.739429849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.739443774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.739559260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.739929331Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 14 00:36:13.745069 containerd[1456]: time="2026-03-14T00:36:13.740089660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 14 00:36:13.745723 containerd[1456]: time="2026-03-14T00:36:13.741244376Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 14 00:36:13.753098 containerd[1456]: time="2026-03-14T00:36:13.749407686Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 14 00:36:13.753098 containerd[1456]: time="2026-03-14T00:36:13.749795030Z" level=info msg="metadata content store policy set" policy=shared
Mar 14 00:36:13.751556 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 14 00:36:13.786549 containerd[1456]: time="2026-03-14T00:36:13.786445103Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 14 00:36:13.787180 containerd[1456]: time="2026-03-14T00:36:13.786974321Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 14 00:36:13.787232 containerd[1456]: time="2026-03-14T00:36:13.787204460Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 14 00:36:13.787267 containerd[1456]: time="2026-03-14T00:36:13.787231602Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 14 00:36:13.787267 containerd[1456]: time="2026-03-14T00:36:13.787249535Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 14 00:36:13.787531 containerd[1456]: time="2026-03-14T00:36:13.787439350Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.789262032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.790053810Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.790078536Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.790234849Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.790261548Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.790411969Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.790556980Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.790750261Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.790955103Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.791073624Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.791099612Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.791236799Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.791281272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794184 containerd[1456]: time="2026-03-14T00:36:13.791307491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791330564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791354098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791381418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791406375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791426102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791447452Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791469222Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791492957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791514146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791533533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791554312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791577505Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791612530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791634181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.794524 containerd[1456]: time="2026-03-14T00:36:13.791650160Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 14 00:36:13.795027 containerd[1456]: time="2026-03-14T00:36:13.791712998Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 14 00:36:13.795027 containerd[1456]: time="2026-03-14T00:36:13.791737123Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 14 00:36:13.795027 containerd[1456]: time="2026-03-14T00:36:13.791750658Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 14 00:36:13.795027 containerd[1456]: time="2026-03-14T00:36:13.791766999Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 14 00:36:13.795027 containerd[1456]: time="2026-03-14T00:36:13.791782768Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.795027 containerd[1456]: time="2026-03-14T00:36:13.791805701Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 14 00:36:13.795027 containerd[1456]: time="2026-03-14T00:36:13.791832260Z" level=info msg="NRI interface is disabled by configuration."
Mar 14 00:36:13.795027 containerd[1456]: time="2026-03-14T00:36:13.791922168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 14 00:36:13.795387 containerd[1456]: time="2026-03-14T00:36:13.792573194Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 14 00:36:13.795387 containerd[1456]: time="2026-03-14T00:36:13.792666839Z" level=info msg="Connect containerd service"
Mar 14 00:36:13.795387 containerd[1456]: time="2026-03-14T00:36:13.792724927Z" level=info msg="using legacy CRI server"
Mar 14 00:36:13.795387 containerd[1456]: time="2026-03-14T00:36:13.792734294Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 14 00:36:13.795387 containerd[1456]: time="2026-03-14T00:36:13.792912648Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 14 00:36:13.807207 containerd[1456]: time="2026-03-14T00:36:13.796788123Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:36:13.807207 containerd[1456]: time="2026-03-14T00:36:13.797175175Z" level=info msg="Start subscribing containerd event"
Mar 14 00:36:13.807207 containerd[1456]: time="2026-03-14T00:36:13.797229527Z" level=info msg="Start recovering state"
Mar 14 00:36:13.807207 containerd[1456]: time="2026-03-14T00:36:13.797314896Z" level=info msg="Start event monitor"
Mar 14 00:36:13.807207 containerd[1456]: time="2026-03-14T00:36:13.797342748Z" level=info msg="Start snapshots syncer"
Mar 14 00:36:13.807207 containerd[1456]: time="2026-03-14T00:36:13.797356203Z" level=info msg="Start cni network conf syncer for default"
Mar 14 00:36:13.807207 containerd[1456]: time="2026-03-14T00:36:13.797367945Z" level=info msg="Start streaming server"
Mar 14 00:36:13.807207 containerd[1456]: time="2026-03-14T00:36:13.798565741Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 14 00:36:13.807207 containerd[1456]: time="2026-03-14T00:36:13.799086393Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 14 00:36:13.807207 containerd[1456]: time="2026-03-14T00:36:13.802614099Z" level=info msg="containerd successfully booted in 0.158823s"
Mar 14 00:36:13.814080 systemd[1]: Started containerd.service - containerd container runtime.
Mar 14 00:36:13.863097 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 14 00:36:13.875698 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 14 00:36:13.883738 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 14 00:36:13.876085 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 14 00:36:13.893333 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 14 00:36:13.944608 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 14 00:36:13.969682 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 14 00:36:13.980414 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:57544.service - OpenSSH per-connection server daemon (10.0.0.1:57544).
Mar 14 00:36:14.016461 systemd[1]: issuegen.service: Deactivated successfully.
Mar 14 00:36:14.017048 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 14 00:36:14.046761 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 14 00:36:14.115946 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 14 00:36:14.127431 sshd[1528]: Accepted publickey for core from 10.0.0.1 port 57544 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:36:14.142080 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:36:14.170225 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 14 00:36:14.207471 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 14 00:36:14.230608 systemd[1]: Reached target getty.target - Login Prompts. Mar 14 00:36:14.299646 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 14 00:36:14.331791 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 14 00:36:14.363690 systemd-logind[1446]: New session 1 of user core. Mar 14 00:36:14.411557 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 14 00:36:14.446402 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 14 00:36:14.460420 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 14 00:36:14.526043 tar[1454]: linux-amd64/README.md Mar 14 00:36:14.577262 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 14 00:36:14.689390 systemd[1539]: Queued start job for default target default.target. Mar 14 00:36:14.708776 systemd[1539]: Created slice app.slice - User Application Slice. Mar 14 00:36:14.708826 systemd[1539]: Reached target paths.target - Paths. Mar 14 00:36:14.709200 systemd[1539]: Reached target timers.target - Timers. Mar 14 00:36:14.716260 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 14 00:36:14.761634 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
Mar 14 00:36:14.764439 systemd[1539]: Reached target sockets.target - Sockets. Mar 14 00:36:14.764470 systemd[1539]: Reached target basic.target - Basic System. Mar 14 00:36:14.764603 systemd[1539]: Reached target default.target - Main User Target. Mar 14 00:36:14.764670 systemd[1539]: Startup finished in 289ms. Mar 14 00:36:14.764767 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 14 00:36:14.795902 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 14 00:36:14.887938 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:57558.service - OpenSSH per-connection server daemon (10.0.0.1:57558). Mar 14 00:36:14.985677 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 57558 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:36:14.991766 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:36:15.013091 systemd-logind[1446]: New session 2 of user core. Mar 14 00:36:15.034737 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 14 00:36:15.157720 sshd[1553]: pam_unix(sshd:session): session closed for user core Mar 14 00:36:15.178391 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:57558.service: Deactivated successfully. Mar 14 00:36:15.183698 systemd[1]: session-2.scope: Deactivated successfully. Mar 14 00:36:15.187782 systemd-logind[1446]: Session 2 logged out. Waiting for processes to exit. Mar 14 00:36:15.199598 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:57574.service - OpenSSH per-connection server daemon (10.0.0.1:57574). Mar 14 00:36:15.213741 systemd-logind[1446]: Removed session 2. Mar 14 00:36:15.315564 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 57574 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:36:15.320902 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:36:15.341680 systemd-logind[1446]: New session 3 of user core. 
Mar 14 00:36:15.359594 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 14 00:36:15.452732 sshd[1560]: pam_unix(sshd:session): session closed for user core Mar 14 00:36:15.462406 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:57574.service: Deactivated successfully. Mar 14 00:36:15.468495 systemd[1]: session-3.scope: Deactivated successfully. Mar 14 00:36:15.474083 systemd-logind[1446]: Session 3 logged out. Waiting for processes to exit. Mar 14 00:36:15.478782 systemd-logind[1446]: Removed session 3. Mar 14 00:36:15.790643 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:36:15.791236 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:36:15.801443 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 14 00:36:15.826220 systemd[1]: Startup finished in 3.290s (kernel) + 12.730s (initrd) + 12.415s (userspace) = 28.435s. Mar 14 00:36:18.082303 kubelet[1572]: E0314 00:36:18.080360 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:36:18.089693 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:36:18.090065 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:36:18.091031 systemd[1]: kubelet.service: Consumed 2.002s CPU time. Mar 14 00:36:25.517035 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:36386.service - OpenSSH per-connection server daemon (10.0.0.1:36386). 
Mar 14 00:36:25.634694 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 36386 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:36:25.649700 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:36:25.677302 systemd-logind[1446]: New session 4 of user core. Mar 14 00:36:25.696946 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 14 00:36:25.786493 sshd[1587]: pam_unix(sshd:session): session closed for user core Mar 14 00:36:25.805251 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:36386.service: Deactivated successfully. Mar 14 00:36:25.810797 systemd[1]: session-4.scope: Deactivated successfully. Mar 14 00:36:25.817329 systemd-logind[1446]: Session 4 logged out. Waiting for processes to exit. Mar 14 00:36:25.835073 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:36398.service - OpenSSH per-connection server daemon (10.0.0.1:36398). Mar 14 00:36:25.840478 systemd-logind[1446]: Removed session 4. Mar 14 00:36:25.951215 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 36398 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:36:25.953850 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:36:25.969507 systemd-logind[1446]: New session 5 of user core. Mar 14 00:36:25.979684 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 14 00:36:26.054721 sshd[1594]: pam_unix(sshd:session): session closed for user core Mar 14 00:36:26.070492 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:36398.service: Deactivated successfully. Mar 14 00:36:26.073516 systemd[1]: session-5.scope: Deactivated successfully. Mar 14 00:36:26.076521 systemd-logind[1446]: Session 5 logged out. Waiting for processes to exit. Mar 14 00:36:26.088953 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:36410.service - OpenSSH per-connection server daemon (10.0.0.1:36410). Mar 14 00:36:26.093067 systemd-logind[1446]: Removed session 5. 
Mar 14 00:36:26.144447 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 36410 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:36:26.151300 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:36:26.165514 systemd-logind[1446]: New session 6 of user core. Mar 14 00:36:26.176505 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 14 00:36:26.280872 sshd[1601]: pam_unix(sshd:session): session closed for user core Mar 14 00:36:26.294189 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:36410.service: Deactivated successfully. Mar 14 00:36:26.300564 systemd[1]: session-6.scope: Deactivated successfully. Mar 14 00:36:26.306860 systemd-logind[1446]: Session 6 logged out. Waiting for processes to exit. Mar 14 00:36:26.319787 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:36414.service - OpenSSH per-connection server daemon (10.0.0.1:36414). Mar 14 00:36:26.321729 systemd-logind[1446]: Removed session 6. Mar 14 00:36:26.373474 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 36414 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:36:26.376473 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:36:26.389231 systemd-logind[1446]: New session 7 of user core. Mar 14 00:36:26.400678 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 14 00:36:26.544863 sudo[1611]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 14 00:36:26.545692 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:36:26.632269 sudo[1611]: pam_unix(sudo:session): session closed for user root Mar 14 00:36:26.637323 sshd[1608]: pam_unix(sshd:session): session closed for user core Mar 14 00:36:26.652203 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:36414.service: Deactivated successfully. Mar 14 00:36:26.655324 systemd[1]: session-7.scope: Deactivated successfully. 
Mar 14 00:36:26.657751 systemd-logind[1446]: Session 7 logged out. Waiting for processes to exit. Mar 14 00:36:26.673346 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:36422.service - OpenSSH per-connection server daemon (10.0.0.1:36422). Mar 14 00:36:26.677487 systemd-logind[1446]: Removed session 7. Mar 14 00:36:26.738945 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 36422 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:36:26.744096 sshd[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:36:26.758833 systemd-logind[1446]: New session 8 of user core. Mar 14 00:36:26.772538 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 14 00:36:26.860835 sudo[1621]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 14 00:36:26.861658 sudo[1621]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:36:26.873320 sudo[1621]: pam_unix(sudo:session): session closed for user root Mar 14 00:36:26.887482 sudo[1620]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 14 00:36:26.889004 sudo[1620]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:36:26.932774 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 14 00:36:26.938691 auditctl[1624]: No rules Mar 14 00:36:26.939709 systemd[1]: audit-rules.service: Deactivated successfully. Mar 14 00:36:26.940483 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 14 00:36:26.958529 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 14 00:36:27.061960 augenrules[1642]: No rules Mar 14 00:36:27.065984 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Mar 14 00:36:27.069757 sudo[1620]: pam_unix(sudo:session): session closed for user root Mar 14 00:36:27.075823 sshd[1616]: pam_unix(sshd:session): session closed for user core Mar 14 00:36:27.099669 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:36422.service: Deactivated successfully. Mar 14 00:36:27.104443 systemd[1]: session-8.scope: Deactivated successfully. Mar 14 00:36:27.112353 systemd-logind[1446]: Session 8 logged out. Waiting for processes to exit. Mar 14 00:36:27.125989 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:36436.service - OpenSSH per-connection server daemon (10.0.0.1:36436). Mar 14 00:36:27.130661 systemd-logind[1446]: Removed session 8. Mar 14 00:36:27.187973 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 36436 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4 Mar 14 00:36:27.190445 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 14 00:36:27.203564 systemd-logind[1446]: New session 9 of user core. Mar 14 00:36:27.213408 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 14 00:36:27.303559 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 14 00:36:27.304333 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 14 00:36:28.423782 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 14 00:36:28.438526 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:36:29.135579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:36:29.185279 (kubelet)[1678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:36:30.039019 kubelet[1678]: E0314 00:36:30.038658 1678 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:36:30.055843 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:36:30.056328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:36:30.056807 systemd[1]: kubelet.service: Consumed 1.702s CPU time. Mar 14 00:36:30.125746 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 14 00:36:30.127226 (dockerd)[1689]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 14 00:36:34.371733 dockerd[1689]: time="2026-03-14T00:36:34.370751858Z" level=info msg="Starting up" Mar 14 00:36:35.859227 dockerd[1689]: time="2026-03-14T00:36:35.857809701Z" level=info msg="Loading containers: start." Mar 14 00:36:36.416309 kernel: Initializing XFRM netlink socket Mar 14 00:36:36.691616 systemd-networkd[1385]: docker0: Link UP Mar 14 00:36:36.755976 dockerd[1689]: time="2026-03-14T00:36:36.755812426Z" level=info msg="Loading containers: done." Mar 14 00:36:36.824547 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck122863682-merged.mount: Deactivated successfully. 
Mar 14 00:36:36.833543 dockerd[1689]: time="2026-03-14T00:36:36.833319575Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 14 00:36:36.833754 dockerd[1689]: time="2026-03-14T00:36:36.833691229Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 14 00:36:36.834784 dockerd[1689]: time="2026-03-14T00:36:36.834658706Z" level=info msg="Daemon has completed initialization" Mar 14 00:36:37.225354 dockerd[1689]: time="2026-03-14T00:36:37.224758403Z" level=info msg="API listen on /run/docker.sock" Mar 14 00:36:37.226889 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 14 00:36:40.308900 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 14 00:36:40.353657 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:36:42.233457 containerd[1456]: time="2026-03-14T00:36:42.232443841Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\"" Mar 14 00:36:42.535212 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 14 00:36:42.560508 (kubelet)[1842]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:36:43.141353 kubelet[1842]: E0314 00:36:43.141063 1842 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:36:43.167865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:36:43.168379 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 14 00:36:43.169335 systemd[1]: kubelet.service: Consumed 2.299s CPU time. Mar 14 00:36:43.616004 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484867144.mount: Deactivated successfully. Mar 14 00:36:48.253258 containerd[1456]: time="2026-03-14T00:36:48.252871216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:36:48.256286 containerd[1456]: time="2026-03-14T00:36:48.256065736Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497" Mar 14 00:36:48.258755 containerd[1456]: time="2026-03-14T00:36:48.258261831Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:36:48.263533 containerd[1456]: time="2026-03-14T00:36:48.263335397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:36:48.266227 containerd[1456]: time="2026-03-14T00:36:48.265990675Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 6.033492863s" Mar 14 00:36:48.266227 containerd[1456]: time="2026-03-14T00:36:48.266179355Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\"" Mar 14 00:36:48.269969 containerd[1456]: time="2026-03-14T00:36:48.269834479Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\"" Mar 14 00:36:53.310669 containerd[1456]: time="2026-03-14T00:36:53.309574216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:36:53.312331 containerd[1456]: time="2026-03-14T00:36:53.312074528Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823" Mar 14 00:36:53.317037 containerd[1456]: time="2026-03-14T00:36:53.316964430Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:36:53.319626 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 14 00:36:53.330303 containerd[1456]: time="2026-03-14T00:36:53.330075138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:36:53.331620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 14 00:36:53.332522 containerd[1456]: time="2026-03-14T00:36:53.332259345Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 5.062338275s" Mar 14 00:36:53.332522 containerd[1456]: time="2026-03-14T00:36:53.332301974Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\"" Mar 14 00:36:53.336788 containerd[1456]: time="2026-03-14T00:36:53.336394970Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\"" Mar 14 00:36:53.747745 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:36:53.750798 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:36:54.064539 kubelet[1922]: E0314 00:36:54.064236 1922 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:36:54.069711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:36:54.070527 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 14 00:36:57.080094 containerd[1456]: time="2026-03-14T00:36:57.076926780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:36:57.087415 containerd[1456]: time="2026-03-14T00:36:57.087264656Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:36:57.088663 containerd[1456]: time="2026-03-14T00:36:57.087933505Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824" Mar 14 00:36:57.097934 containerd[1456]: time="2026-03-14T00:36:57.097576600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:36:57.102753 containerd[1456]: time="2026-03-14T00:36:57.102408634Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 3.765960745s" Mar 14 00:36:57.102753 containerd[1456]: time="2026-03-14T00:36:57.102529148Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\"" Mar 14 00:36:57.110501 containerd[1456]: time="2026-03-14T00:36:57.110441760Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\"" Mar 14 00:36:58.417303 update_engine[1447]: I20260314 00:36:58.413450 1447 update_attempter.cc:509] Updating boot flags... 
Mar 14 00:36:59.165839 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1946) Mar 14 00:37:02.481589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3019977566.mount: Deactivated successfully. Mar 14 00:37:04.326731 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 14 00:37:04.337809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:37:04.747786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:37:04.780996 (kubelet)[1963]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 14 00:37:05.101063 kubelet[1963]: E0314 00:37:05.100614 1963 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 14 00:37:05.113474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 14 00:37:05.113807 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 14 00:37:05.534596 containerd[1456]: time="2026-03-14T00:37:05.533586104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:05.536933 containerd[1456]: time="2026-03-14T00:37:05.536798997Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770" Mar 14 00:37:05.538643 containerd[1456]: time="2026-03-14T00:37:05.538561118Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:05.547287 containerd[1456]: time="2026-03-14T00:37:05.546945123Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:05.548242 containerd[1456]: time="2026-03-14T00:37:05.547898048Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 8.437351863s" Mar 14 00:37:05.548242 containerd[1456]: time="2026-03-14T00:37:05.547944384Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\"" Mar 14 00:37:05.551309 containerd[1456]: time="2026-03-14T00:37:05.551271741Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Mar 14 00:37:06.366745 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2719259304.mount: Deactivated successfully. 
Mar 14 00:37:10.093586 containerd[1456]: time="2026-03-14T00:37:10.092778406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:10.096039 containerd[1456]: time="2026-03-14T00:37:10.094283909Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007" Mar 14 00:37:10.098068 containerd[1456]: time="2026-03-14T00:37:10.097970031Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:10.107302 containerd[1456]: time="2026-03-14T00:37:10.106961549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:10.109698 containerd[1456]: time="2026-03-14T00:37:10.109542339Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 4.558134104s" Mar 14 00:37:10.109698 containerd[1456]: time="2026-03-14T00:37:10.109630715Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Mar 14 00:37:10.114980 containerd[1456]: time="2026-03-14T00:37:10.114835625Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Mar 14 00:37:10.890549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount714065086.mount: Deactivated successfully. 
Mar 14 00:37:10.939288 containerd[1456]: time="2026-03-14T00:37:10.937325365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:10.945514 containerd[1456]: time="2026-03-14T00:37:10.945258558Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218" Mar 14 00:37:10.951313 containerd[1456]: time="2026-03-14T00:37:10.950579706Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:10.965050 containerd[1456]: time="2026-03-14T00:37:10.962463363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:37:10.965050 containerd[1456]: time="2026-03-14T00:37:10.963904288Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 848.848092ms" Mar 14 00:37:10.965050 containerd[1456]: time="2026-03-14T00:37:10.963949552Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Mar 14 00:37:10.978381 containerd[1456]: time="2026-03-14T00:37:10.976748746Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\"" Mar 14 00:37:11.882267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092500847.mount: Deactivated successfully. Mar 14 00:37:15.322654 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Mar 14 00:37:15.347442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:37:15.743099 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:37:15.773546 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 14 00:37:16.019502 kubelet[2094]: E0314 00:37:16.018993 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 14 00:37:16.026782 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 14 00:37:16.027251 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 14 00:37:16.353070 containerd[1456]: time="2026-03-14T00:37:16.351425747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:37:16.359616 containerd[1456]: time="2026-03-14T00:37:16.359197990Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 14 00:37:16.363920 containerd[1456]: time="2026-03-14T00:37:16.363362906Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:37:16.374486 containerd[1456]: time="2026-03-14T00:37:16.374079328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:37:16.377323 containerd[1456]: time="2026-03-14T00:37:16.377092841Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 5.400291508s"
Mar 14 00:37:16.377323 containerd[1456]: time="2026-03-14T00:37:16.377287434Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 14 00:37:22.600807 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:37:22.622099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:37:22.671668 systemd[1]: Reloading requested from client PID 2139 ('systemctl') (unit session-9.scope)...
Mar 14 00:37:22.671788 systemd[1]: Reloading...
Mar 14 00:37:22.794186 zram_generator::config[2178]: No configuration found.
Mar 14 00:37:23.037816 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 14 00:37:23.151250 systemd[1]: Reloading finished in 478 ms.
Mar 14 00:37:23.231659 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 14 00:37:23.231840 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 14 00:37:23.232402 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:37:23.248792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 14 00:37:23.490001 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 14 00:37:23.498350 (kubelet)[2226]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 14 00:37:23.605520 kubelet[2226]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 14 00:37:23.605520 kubelet[2226]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 14 00:37:23.606064 kubelet[2226]: I0314 00:37:23.605550 2226 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 14 00:37:23.883688 kubelet[2226]: I0314 00:37:23.883590 2226 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 14 00:37:23.883688 kubelet[2226]: I0314 00:37:23.883662 2226 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 14 00:37:23.883943 kubelet[2226]: I0314 00:37:23.883727 2226 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 14 00:37:23.883943 kubelet[2226]: I0314 00:37:23.883749 2226 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 14 00:37:23.884510 kubelet[2226]: I0314 00:37:23.884439 2226 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 14 00:37:23.986089 kubelet[2226]: E0314 00:37:23.985980 2226 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:37:23.988526 kubelet[2226]: I0314 00:37:23.988456 2226 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 14 00:37:24.000281 kubelet[2226]: E0314 00:37:24.000089 2226 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 14 00:37:24.000281 kubelet[2226]: I0314 00:37:24.000249 2226 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 14 00:37:24.026280 kubelet[2226]: I0314 00:37:24.023313 2226 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 14 00:37:24.027089 kubelet[2226]: I0314 00:37:24.026997 2226 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 14 00:37:24.028278 kubelet[2226]: I0314 00:37:24.027068 2226 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 14 00:37:24.028278 kubelet[2226]: I0314 00:37:24.027412 2226 topology_manager.go:138] "Creating topology manager with none policy"
Mar 14 00:37:24.028278 kubelet[2226]: I0314 00:37:24.027428 2226 container_manager_linux.go:306] "Creating device plugin manager"
Mar 14 00:37:24.028278 kubelet[2226]: I0314 00:37:24.027577 2226 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 14 00:37:24.035365 kubelet[2226]: I0314 00:37:24.035200 2226 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:37:24.035909 kubelet[2226]: I0314 00:37:24.035745 2226 kubelet.go:475] "Attempting to sync node with API server"
Mar 14 00:37:24.037940 kubelet[2226]: I0314 00:37:24.037280 2226 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 14 00:37:24.037940 kubelet[2226]: E0314 00:37:24.037717 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:37:24.039974 kubelet[2226]: I0314 00:37:24.038468 2226 kubelet.go:387] "Adding apiserver pod source"
Mar 14 00:37:24.044187 kubelet[2226]: I0314 00:37:24.042970 2226 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 14 00:37:24.049498 kubelet[2226]: I0314 00:37:24.049413 2226 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 14 00:37:24.063768 kubelet[2226]: I0314 00:37:24.062249 2226 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 14 00:37:24.063768 kubelet[2226]: I0314 00:37:24.062311 2226 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 14 00:37:24.063768 kubelet[2226]: W0314 00:37:24.062468 2226 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 14 00:37:24.064666 kubelet[2226]: E0314 00:37:24.063796 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:37:24.084228 kubelet[2226]: I0314 00:37:24.084081 2226 server.go:1262] "Started kubelet"
Mar 14 00:37:24.084756 kubelet[2226]: I0314 00:37:24.084337 2226 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 14 00:37:24.086471 kubelet[2226]: I0314 00:37:24.086167 2226 server.go:310] "Adding debug handlers to kubelet server"
Mar 14 00:37:24.088632 kubelet[2226]: I0314 00:37:24.088547 2226 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 14 00:37:24.093297 kubelet[2226]: I0314 00:37:24.092054 2226 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 14 00:37:24.093700 kubelet[2226]: I0314 00:37:24.093604 2226 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 14 00:37:24.093856 kubelet[2226]: I0314 00:37:24.093761 2226 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 14 00:37:24.094557 kubelet[2226]: I0314 00:37:24.094486 2226 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 14 00:37:24.096010 kubelet[2226]: I0314 00:37:24.095929 2226 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 14 00:37:24.096238 kubelet[2226]: I0314 00:37:24.096072 2226 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 14 00:37:24.096307 kubelet[2226]: I0314 00:37:24.096299 2226 reconciler.go:29] "Reconciler: start to sync state"
Mar 14 00:37:24.096959 kubelet[2226]: E0314 00:37:24.096825 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:37:24.097216 kubelet[2226]: E0314 00:37:24.096968 2226 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:37:24.097670 kubelet[2226]: E0314 00:37:24.097617 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="200ms"
Mar 14 00:37:24.102209 kubelet[2226]: I0314 00:37:24.102188 2226 factory.go:223] Registration of the systemd container factory successfully
Mar 14 00:37:24.102372 kubelet[2226]: I0314 00:37:24.102354 2226 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 14 00:37:24.113412 kubelet[2226]: E0314 00:37:24.113029 2226 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 14 00:37:24.114545 kubelet[2226]: I0314 00:37:24.114474 2226 factory.go:223] Registration of the containerd container factory successfully
Mar 14 00:37:24.115362 kubelet[2226]: E0314 00:37:24.112291 2226 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189c8e254a06fd11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:37:24.084038929 +0000 UTC m=+0.566722929,LastTimestamp:2026-03-14 00:37:24.084038929 +0000 UTC m=+0.566722929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 14 00:37:24.181573 kubelet[2226]: I0314 00:37:24.180227 2226 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 14 00:37:24.183704 kubelet[2226]: I0314 00:37:24.183218 2226 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 14 00:37:24.183704 kubelet[2226]: I0314 00:37:24.183259 2226 state_mem.go:36] "Initialized new in-memory state store"
Mar 14 00:37:24.197717 kubelet[2226]: I0314 00:37:24.195089 2226 policy_none.go:49] "None policy: Start"
Mar 14 00:37:24.197717 kubelet[2226]: I0314 00:37:24.195304 2226 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 14 00:37:24.197717 kubelet[2226]: I0314 00:37:24.195328 2226 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 14 00:37:24.197717 kubelet[2226]: E0314 00:37:24.197373 2226 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 14 00:37:24.200067 kubelet[2226]: I0314 00:37:24.199970 2226 policy_none.go:47] "Start"
Mar 14 00:37:24.201600 kubelet[2226]: I0314 00:37:24.201477 2226 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 14 00:37:24.213012 kubelet[2226]: I0314 00:37:24.212440 2226 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 14 00:37:24.213012 kubelet[2226]: I0314 00:37:24.212618 2226 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 14 00:37:24.213012 kubelet[2226]: I0314 00:37:24.212673 2226 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 14 00:37:24.213012 kubelet[2226]: E0314 00:37:24.212750 2226 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 14 00:37:24.213764 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 14 00:37:24.217004 kubelet[2226]: E0314 00:37:24.216259 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:37:24.241594 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 14 00:37:24.246086 kubelet[2226]: W0314 00:37:24.245920 2226 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/cpuset.cpus.effective: no such device
Mar 14 00:37:24.283613 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 14 00:37:24.292818 kubelet[2226]: E0314 00:37:24.291344 2226 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 14 00:37:24.292818 kubelet[2226]: I0314 00:37:24.291695 2226 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 14 00:37:24.292818 kubelet[2226]: I0314 00:37:24.291723 2226 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 14 00:37:24.298810 kubelet[2226]: I0314 00:37:24.293537 2226 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 14 00:37:24.306067 kubelet[2226]: E0314 00:37:24.305935 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms"
Mar 14 00:37:24.314813 kubelet[2226]: E0314 00:37:24.314726 2226 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 14 00:37:24.315205 kubelet[2226]: E0314 00:37:24.314839 2226 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 14 00:37:24.354495 systemd[1]: Created slice kubepods-burstable-podb75be6b571daaafbc1922f5c1b6f440e.slice - libcontainer container kubepods-burstable-podb75be6b571daaafbc1922f5c1b6f440e.slice.
Mar 14 00:37:24.380786 kubelet[2226]: E0314 00:37:24.380663 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:37:24.390989 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 14 00:37:24.398186 kubelet[2226]: I0314 00:37:24.397000 2226 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:37:24.398186 kubelet[2226]: E0314 00:37:24.397482 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Mar 14 00:37:24.398618 kubelet[2226]: I0314 00:37:24.398561 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:37:24.398819 kubelet[2226]: I0314 00:37:24.398767 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 14 00:37:24.399337 kubelet[2226]: I0314 00:37:24.398988 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b75be6b571daaafbc1922f5c1b6f440e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b75be6b571daaafbc1922f5c1b6f440e\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:37:24.399337 kubelet[2226]: I0314 00:37:24.399033 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b75be6b571daaafbc1922f5c1b6f440e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b75be6b571daaafbc1922f5c1b6f440e\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:37:24.399337 kubelet[2226]: I0314 00:37:24.399072 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:37:24.399337 kubelet[2226]: I0314 00:37:24.399183 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b75be6b571daaafbc1922f5c1b6f440e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b75be6b571daaafbc1922f5c1b6f440e\") " pod="kube-system/kube-apiserver-localhost"
Mar 14 00:37:24.399337 kubelet[2226]: I0314 00:37:24.399222 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:37:24.399590 kubelet[2226]: I0314 00:37:24.399254 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:37:24.399590 kubelet[2226]: I0314 00:37:24.399296 2226 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 14 00:37:24.416563 kubelet[2226]: E0314 00:37:24.416239 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:37:24.419585 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 14 00:37:24.424315 kubelet[2226]: E0314 00:37:24.424252 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 14 00:37:24.626691 kubelet[2226]: I0314 00:37:24.625851 2226 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:37:24.626691 kubelet[2226]: E0314 00:37:24.626481 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Mar 14 00:37:24.697200 kubelet[2226]: E0314 00:37:24.697010 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:24.700190 containerd[1456]: time="2026-03-14T00:37:24.699759352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b75be6b571daaafbc1922f5c1b6f440e,Namespace:kube-system,Attempt:0,}"
Mar 14 00:37:24.709021 kubelet[2226]: E0314 00:37:24.707998 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms"
Mar 14 00:37:24.723734 kubelet[2226]: E0314 00:37:24.723513 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:24.726085 containerd[1456]: time="2026-03-14T00:37:24.725542652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 14 00:37:24.732349 kubelet[2226]: E0314 00:37:24.731811 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:37:24.732931 containerd[1456]: time="2026-03-14T00:37:24.732818562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 14 00:37:25.031089 kubelet[2226]: I0314 00:37:25.030264 2226 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:37:25.035460 kubelet[2226]: E0314 00:37:25.034370 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Mar 14 00:37:25.229690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955331270.mount: Deactivated successfully.
Mar 14 00:37:25.244773 containerd[1456]: time="2026-03-14T00:37:25.244659934Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:37:25.249046 kubelet[2226]: E0314 00:37:25.248542 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.131:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 14 00:37:25.255944 containerd[1456]: time="2026-03-14T00:37:25.255672889Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 14 00:37:25.257980 containerd[1456]: time="2026-03-14T00:37:25.257786315Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:37:25.260196 containerd[1456]: time="2026-03-14T00:37:25.259974496Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:37:25.262325 containerd[1456]: time="2026-03-14T00:37:25.262199972Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:37:25.264632 containerd[1456]: time="2026-03-14T00:37:25.264553570Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:37:25.266499 containerd[1456]: time="2026-03-14T00:37:25.266437227Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 14 00:37:25.270707 containerd[1456]: time="2026-03-14T00:37:25.270621729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 14 00:37:25.272993 containerd[1456]: time="2026-03-14T00:37:25.272848093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.371529ms"
Mar 14 00:37:25.277592 containerd[1456]: time="2026-03-14T00:37:25.277249835Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 576.232205ms"
Mar 14 00:37:25.280722 containerd[1456]: time="2026-03-14T00:37:25.279654774Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.681805ms"
Mar 14 00:37:25.362052 kubelet[2226]: E0314 00:37:25.361947 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 14 00:37:25.674365 kubelet[2226]: E0314 00:37:25.668845 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="1.6s"
Mar 14 00:37:25.675571 kubelet[2226]: E0314 00:37:25.674033 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 14 00:37:25.705750 kubelet[2226]: E0314 00:37:25.705396 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 14 00:37:25.865749 kubelet[2226]: I0314 00:37:25.865598 2226 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 14 00:37:25.867022 kubelet[2226]: E0314 00:37:25.866600 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Mar 14 00:37:26.010920 containerd[1456]: time="2026-03-14T00:37:26.005773474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:37:26.010920 containerd[1456]: time="2026-03-14T00:37:26.005927522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:37:26.010920 containerd[1456]: time="2026-03-14T00:37:26.005944894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:26.010920 containerd[1456]: time="2026-03-14T00:37:26.006072742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:26.022418 containerd[1456]: time="2026-03-14T00:37:26.019335530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:37:26.022418 containerd[1456]: time="2026-03-14T00:37:26.019418305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:37:26.022418 containerd[1456]: time="2026-03-14T00:37:26.019439064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:26.022418 containerd[1456]: time="2026-03-14T00:37:26.019561432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:26.031217 containerd[1456]: time="2026-03-14T00:37:26.030647134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:37:26.031217 containerd[1456]: time="2026-03-14T00:37:26.030729888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:37:26.031217 containerd[1456]: time="2026-03-14T00:37:26.030755727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:26.031217 containerd[1456]: time="2026-03-14T00:37:26.030962152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:37:26.229674 kubelet[2226]: E0314 00:37:26.229610 2226 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 14 00:37:26.579029 systemd[1]: run-containerd-runc-k8s.io-c5aedaef4715291663cd6db60d2316abb1b3ef0cc77f21854a1240c4e11eefce-runc.nH4zX3.mount: Deactivated successfully.
Mar 14 00:37:26.591409 systemd[1]: Started cri-containerd-c5aedaef4715291663cd6db60d2316abb1b3ef0cc77f21854a1240c4e11eefce.scope - libcontainer container c5aedaef4715291663cd6db60d2316abb1b3ef0cc77f21854a1240c4e11eefce.
Mar 14 00:37:26.617334 systemd[1]: Started cri-containerd-3019d092b4f7aca7815e8ee2404873daeeffe7c2ae43bf0ee366e52ede6eaa9b.scope - libcontainer container 3019d092b4f7aca7815e8ee2404873daeeffe7c2ae43bf0ee366e52ede6eaa9b.
Mar 14 00:37:26.621522 systemd[1]: run-containerd-runc-k8s.io-3019d092b4f7aca7815e8ee2404873daeeffe7c2ae43bf0ee366e52ede6eaa9b-runc.cRGosr.mount: Deactivated successfully.
Mar 14 00:37:26.661972 systemd[1]: Started cri-containerd-e20818d724afc16b477c647bf109f9687f7ea582061a655285a20af94c69ba5e.scope - libcontainer container e20818d724afc16b477c647bf109f9687f7ea582061a655285a20af94c69ba5e.
Mar 14 00:37:26.987093 containerd[1456]: time="2026-03-14T00:37:26.986056321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"3019d092b4f7aca7815e8ee2404873daeeffe7c2ae43bf0ee366e52ede6eaa9b\"" Mar 14 00:37:26.990712 kubelet[2226]: E0314 00:37:26.990305 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:27.002759 containerd[1456]: time="2026-03-14T00:37:27.001934204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"e20818d724afc16b477c647bf109f9687f7ea582061a655285a20af94c69ba5e\"" Mar 14 00:37:27.008331 kubelet[2226]: E0314 00:37:27.004807 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:27.010695 containerd[1456]: time="2026-03-14T00:37:27.010602356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b75be6b571daaafbc1922f5c1b6f440e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5aedaef4715291663cd6db60d2316abb1b3ef0cc77f21854a1240c4e11eefce\"" Mar 14 00:37:27.012397 kubelet[2226]: E0314 00:37:27.012317 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:27.044034 containerd[1456]: time="2026-03-14T00:37:27.042818810Z" level=info msg="CreateContainer within sandbox \"3019d092b4f7aca7815e8ee2404873daeeffe7c2ae43bf0ee366e52ede6eaa9b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 14 00:37:27.055731 containerd[1456]: 
time="2026-03-14T00:37:27.055604520Z" level=info msg="CreateContainer within sandbox \"e20818d724afc16b477c647bf109f9687f7ea582061a655285a20af94c69ba5e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 14 00:37:27.061718 containerd[1456]: time="2026-03-14T00:37:27.061349540Z" level=info msg="CreateContainer within sandbox \"c5aedaef4715291663cd6db60d2316abb1b3ef0cc77f21854a1240c4e11eefce\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 14 00:37:27.100198 containerd[1456]: time="2026-03-14T00:37:27.099361112Z" level=info msg="CreateContainer within sandbox \"3019d092b4f7aca7815e8ee2404873daeeffe7c2ae43bf0ee366e52ede6eaa9b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf91fbc91c7a7c8eea6015dc41801a5ef7657b81475fb29a619fe14bf616f3cc\"" Mar 14 00:37:27.101654 containerd[1456]: time="2026-03-14T00:37:27.101575243Z" level=info msg="StartContainer for \"bf91fbc91c7a7c8eea6015dc41801a5ef7657b81475fb29a619fe14bf616f3cc\"" Mar 14 00:37:27.113075 containerd[1456]: time="2026-03-14T00:37:27.112934324Z" level=info msg="CreateContainer within sandbox \"e20818d724afc16b477c647bf109f9687f7ea582061a655285a20af94c69ba5e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1e74ef7ea4ed6512f6623b1c9ba1cc88f6d8fd46b1ab8e0b0647d8b6d0c37480\"" Mar 14 00:37:27.114375 containerd[1456]: time="2026-03-14T00:37:27.114261319Z" level=info msg="StartContainer for \"1e74ef7ea4ed6512f6623b1c9ba1cc88f6d8fd46b1ab8e0b0647d8b6d0c37480\"" Mar 14 00:37:27.129994 containerd[1456]: time="2026-03-14T00:37:27.129696803Z" level=info msg="CreateContainer within sandbox \"c5aedaef4715291663cd6db60d2316abb1b3ef0cc77f21854a1240c4e11eefce\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"87a320dc6b919f9e4c2fc5d9f00e18e3372dcfdd6f94686b9dd107eb57ae0246\"" Mar 14 00:37:27.132021 containerd[1456]: time="2026-03-14T00:37:27.131760389Z" level=info msg="StartContainer for 
\"87a320dc6b919f9e4c2fc5d9f00e18e3372dcfdd6f94686b9dd107eb57ae0246\"" Mar 14 00:37:27.191757 systemd[1]: Started cri-containerd-1e74ef7ea4ed6512f6623b1c9ba1cc88f6d8fd46b1ab8e0b0647d8b6d0c37480.scope - libcontainer container 1e74ef7ea4ed6512f6623b1c9ba1cc88f6d8fd46b1ab8e0b0647d8b6d0c37480. Mar 14 00:37:27.276609 systemd[1]: Started cri-containerd-bf91fbc91c7a7c8eea6015dc41801a5ef7657b81475fb29a619fe14bf616f3cc.scope - libcontainer container bf91fbc91c7a7c8eea6015dc41801a5ef7657b81475fb29a619fe14bf616f3cc. Mar 14 00:37:27.279754 kubelet[2226]: E0314 00:37:27.279543 2226 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="3.2s" Mar 14 00:37:27.368555 systemd[1]: Started cri-containerd-87a320dc6b919f9e4c2fc5d9f00e18e3372dcfdd6f94686b9dd107eb57ae0246.scope - libcontainer container 87a320dc6b919f9e4c2fc5d9f00e18e3372dcfdd6f94686b9dd107eb57ae0246. 
Mar 14 00:37:27.470678 containerd[1456]: time="2026-03-14T00:37:27.470567243Z" level=info msg="StartContainer for \"bf91fbc91c7a7c8eea6015dc41801a5ef7657b81475fb29a619fe14bf616f3cc\" returns successfully" Mar 14 00:37:27.470837 kubelet[2226]: I0314 00:37:27.470729 2226 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:37:27.472424 kubelet[2226]: E0314 00:37:27.471325 2226 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost" Mar 14 00:37:27.509091 containerd[1456]: time="2026-03-14T00:37:27.508956112Z" level=info msg="StartContainer for \"87a320dc6b919f9e4c2fc5d9f00e18e3372dcfdd6f94686b9dd107eb57ae0246\" returns successfully" Mar 14 00:37:27.525202 containerd[1456]: time="2026-03-14T00:37:27.524091652Z" level=info msg="StartContainer for \"1e74ef7ea4ed6512f6623b1c9ba1cc88f6d8fd46b1ab8e0b0647d8b6d0c37480\" returns successfully" Mar 14 00:37:27.528774 kubelet[2226]: E0314 00:37:27.528682 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 14 00:37:27.547818 kubelet[2226]: E0314 00:37:27.547627 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:27.548780 kubelet[2226]: E0314 00:37:27.548384 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:27.560711 kubelet[2226]: E0314 00:37:27.560631 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:27.560840 kubelet[2226]: E0314 00:37:27.560828 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:27.650452 kubelet[2226]: E0314 00:37:27.650045 2226 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.131:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 14 00:37:28.569622 kubelet[2226]: E0314 00:37:28.569454 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:28.570860 kubelet[2226]: E0314 00:37:28.569766 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:28.570860 kubelet[2226]: E0314 00:37:28.570596 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:28.570860 kubelet[2226]: E0314 00:37:28.570714 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:29.579476 kubelet[2226]: E0314 00:37:29.576039 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:29.579476 kubelet[2226]: E0314 00:37:29.576478 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:29.822364 kubelet[2226]: E0314 00:37:29.822000 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:29.823072 kubelet[2226]: E0314 00:37:29.822657 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:30.601002 kubelet[2226]: E0314 00:37:30.600710 2226 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 14 00:37:30.602703 kubelet[2226]: E0314 00:37:30.601231 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:30.675721 kubelet[2226]: I0314 00:37:30.675245 2226 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:37:33.260863 kubelet[2226]: E0314 00:37:33.260509 2226 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 14 00:37:33.370243 kubelet[2226]: E0314 00:37:33.369592 2226 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189c8e254a06fd11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:37:24.084038929 +0000 UTC m=+0.566722929,LastTimestamp:2026-03-14 00:37:24.084038929 +0000 UTC m=+0.566722929,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 14 00:37:33.463190 kubelet[2226]: I0314 00:37:33.462759 2226 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 14 00:37:33.463190 kubelet[2226]: E0314 00:37:33.463023 2226 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 14 00:37:33.480207 kubelet[2226]: E0314 00:37:33.478307 2226 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.189c8e254bc10091 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-14 00:37:24.113006737 +0000 UTC m=+0.595690667,LastTimestamp:2026-03-14 00:37:24.113006737 +0000 UTC m=+0.595690667,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 14 00:37:33.497245 kubelet[2226]: I0314 00:37:33.496994 2226 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:33.531273 kubelet[2226]: E0314 00:37:33.530717 2226 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:33.531273 kubelet[2226]: I0314 00:37:33.530770 2226 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:33.557262 kubelet[2226]: E0314 00:37:33.555248 2226 kubelet.go:3222] "Failed creating a mirror pod" err="pods 
\"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:33.557262 kubelet[2226]: I0314 00:37:33.555674 2226 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:37:33.559527 kubelet[2226]: E0314 00:37:33.559496 2226 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Mar 14 00:37:34.066347 kubelet[2226]: I0314 00:37:34.065733 2226 apiserver.go:52] "Watching apiserver" Mar 14 00:37:34.096965 kubelet[2226]: I0314 00:37:34.096428 2226 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:37:34.405066 kubelet[2226]: I0314 00:37:34.401700 2226 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:34.462453 kubelet[2226]: E0314 00:37:34.462398 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:34.666872 kubelet[2226]: E0314 00:37:34.664594 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:38.902368 kubelet[2226]: I0314 00:37:38.901661 2226 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:37:38.969313 kubelet[2226]: E0314 00:37:38.968804 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:39.998022 kubelet[2226]: I0314 00:37:39.996603 2226 kubelet.go:3220] "Creating a mirror 
pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:40.015985 kubelet[2226]: E0314 00:37:40.006563 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:40.505163 kubelet[2226]: I0314 00:37:40.502664 2226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=6.5026412019999995 podStartE2EDuration="6.502641202s" podCreationTimestamp="2026-03-14 00:37:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:37:39.207577461 +0000 UTC m=+15.690261411" watchObservedRunningTime="2026-03-14 00:37:40.502641202 +0000 UTC m=+16.985325142" Mar 14 00:37:40.505163 kubelet[2226]: E0314 00:37:40.502858 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:40.700391 kubelet[2226]: I0314 00:37:40.699476 2226 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.6994522180000002 podStartE2EDuration="2.699452218s" podCreationTimestamp="2026-03-14 00:37:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:37:40.524879751 +0000 UTC m=+17.007563691" watchObservedRunningTime="2026-03-14 00:37:40.699452218 +0000 UTC m=+17.182136139" Mar 14 00:37:40.797373 systemd[1]: Reloading requested from client PID 2520 ('systemctl') (unit session-9.scope)... Mar 14 00:37:40.797430 systemd[1]: Reloading... 
Mar 14 00:37:41.200231 kubelet[2226]: E0314 00:37:41.200159 2226 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:41.280294 zram_generator::config[2559]: No configuration found. Mar 14 00:37:41.708613 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 14 00:37:41.890407 systemd[1]: Reloading finished in 1092 ms. Mar 14 00:37:42.092498 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:37:42.111512 systemd[1]: kubelet.service: Deactivated successfully. Mar 14 00:37:42.111968 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:37:42.112096 systemd[1]: kubelet.service: Consumed 4.566s CPU time, 131.1M memory peak, 0B memory swap peak. Mar 14 00:37:42.131281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 14 00:37:42.445514 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 14 00:37:42.461823 (kubelet)[2604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 14 00:37:42.709884 sudo[2610]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 14 00:37:42.710624 sudo[2610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 14 00:37:42.901850 kubelet[2604]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 14 00:37:42.901850 kubelet[2604]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 14 00:37:42.901850 kubelet[2604]: I0314 00:37:42.901666 2604 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 14 00:37:42.913414 kubelet[2604]: I0314 00:37:42.913356 2604 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Mar 14 00:37:42.913414 kubelet[2604]: I0314 00:37:42.913381 2604 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 14 00:37:42.913414 kubelet[2604]: I0314 00:37:42.913409 2604 watchdog_linux.go:95] "Systemd watchdog is not enabled" Mar 14 00:37:42.913414 kubelet[2604]: I0314 00:37:42.913416 2604 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 14 00:37:42.913624 kubelet[2604]: I0314 00:37:42.913573 2604 server.go:956] "Client rotation is on, will bootstrap in background" Mar 14 00:37:42.915320 kubelet[2604]: I0314 00:37:42.915230 2604 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 14 00:37:42.924630 kubelet[2604]: I0314 00:37:42.923462 2604 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 14 00:37:42.937664 kubelet[2604]: E0314 00:37:42.937616 2604 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 14 00:37:42.938335 kubelet[2604]: I0314 00:37:42.938076 2604 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Mar 14 00:37:42.997018 kubelet[2604]: I0314 00:37:42.995812 2604 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Mar 14 00:37:43.008814 kubelet[2604]: I0314 00:37:43.005648 2604 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 14 00:37:43.008814 kubelet[2604]: I0314 00:37:43.005734 2604 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 14 00:37:43.008814 kubelet[2604]: I0314 00:37:43.006021 2604 topology_manager.go:138] "Creating topology manager with none policy" Mar 14 00:37:43.008814 
kubelet[2604]: I0314 00:37:43.006045 2604 container_manager_linux.go:306] "Creating device plugin manager" Mar 14 00:37:43.010489 kubelet[2604]: I0314 00:37:43.008975 2604 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Mar 14 00:37:43.017760 kubelet[2604]: I0314 00:37:43.016648 2604 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:37:43.017760 kubelet[2604]: I0314 00:37:43.017718 2604 kubelet.go:475] "Attempting to sync node with API server" Mar 14 00:37:43.035628 kubelet[2604]: I0314 00:37:43.035249 2604 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 14 00:37:43.035628 kubelet[2604]: I0314 00:37:43.035462 2604 kubelet.go:387] "Adding apiserver pod source" Mar 14 00:37:43.035628 kubelet[2604]: I0314 00:37:43.035484 2604 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 14 00:37:43.097189 kubelet[2604]: I0314 00:37:43.096359 2604 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 14 00:37:43.107577 kubelet[2604]: I0314 00:37:43.106212 2604 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 14 00:37:43.107577 kubelet[2604]: I0314 00:37:43.106285 2604 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Mar 14 00:37:43.173413 kubelet[2604]: I0314 00:37:43.169833 2604 server.go:1262] "Started kubelet" Mar 14 00:37:43.173413 kubelet[2604]: I0314 00:37:43.172867 2604 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 14 00:37:43.173413 kubelet[2604]: I0314 00:37:43.173256 2604 server_v1.go:49] "podresources" method="list" useActivePods=true Mar 14 00:37:43.237545 kubelet[2604]: I0314 00:37:43.236748 2604 server.go:180] 
"Starting to listen" address="0.0.0.0" port=10250 Mar 14 00:37:43.279985 kubelet[2604]: I0314 00:37:43.276589 2604 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 14 00:37:43.384898 kubelet[2604]: E0314 00:37:43.383512 2604 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 14 00:37:43.388223 kubelet[2604]: I0314 00:37:43.386037 2604 server.go:310] "Adding debug handlers to kubelet server" Mar 14 00:37:43.388223 kubelet[2604]: I0314 00:37:43.387519 2604 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 14 00:37:43.388641 kubelet[2604]: I0314 00:37:43.388612 2604 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 14 00:37:43.394553 kubelet[2604]: I0314 00:37:43.394528 2604 volume_manager.go:313] "Starting Kubelet Volume Manager" Mar 14 00:37:43.417520 kubelet[2604]: I0314 00:37:43.411862 2604 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 14 00:37:43.417520 kubelet[2604]: I0314 00:37:43.413195 2604 reconciler.go:29] "Reconciler: start to sync state" Mar 14 00:37:43.417520 kubelet[2604]: I0314 00:37:43.414230 2604 factory.go:223] Registration of the systemd container factory successfully Mar 14 00:37:43.417520 kubelet[2604]: I0314 00:37:43.416758 2604 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 14 00:37:43.422088 kubelet[2604]: I0314 00:37:43.421191 2604 factory.go:223] Registration of the containerd container factory successfully Mar 14 00:37:43.575489 kubelet[2604]: I0314 00:37:43.573482 2604 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Mar 14 00:37:43.597016 kubelet[2604]: I0314 00:37:43.595879 2604 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Mar 14 00:37:43.597016 kubelet[2604]: I0314 00:37:43.596291 2604 status_manager.go:244] "Starting to sync pod status with apiserver" Mar 14 00:37:43.597016 kubelet[2604]: I0314 00:37:43.596336 2604 kubelet.go:2428] "Starting kubelet main sync loop" Mar 14 00:37:43.608917 kubelet[2604]: E0314 00:37:43.600550 2604 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 14 00:37:43.702336 kubelet[2604]: E0314 00:37:43.701329 2604 kubelet.go:2452] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 14 00:37:43.744834 kubelet[2604]: I0314 00:37:43.743758 2604 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 14 00:37:43.744834 kubelet[2604]: I0314 00:37:43.743790 2604 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 14 00:37:43.744834 kubelet[2604]: I0314 00:37:43.743824 2604 state_mem.go:36] "Initialized new in-memory state store" Mar 14 00:37:43.746831 kubelet[2604]: I0314 00:37:43.746799 2604 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 14 00:37:43.748445 kubelet[2604]: I0314 00:37:43.747073 2604 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 14 00:37:43.748445 kubelet[2604]: I0314 00:37:43.747260 2604 policy_none.go:49] "None policy: Start" Mar 14 00:37:43.748445 kubelet[2604]: I0314 00:37:43.747277 2604 memory_manager.go:187] "Starting memorymanager" policy="None" Mar 14 00:37:43.748445 kubelet[2604]: I0314 00:37:43.747296 2604 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Mar 14 00:37:43.748445 kubelet[2604]: I0314 00:37:43.747459 2604 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Mar 14 00:37:43.748445 
kubelet[2604]: I0314 00:37:43.747472 2604 policy_none.go:47] "Start" Mar 14 00:37:43.783890 kubelet[2604]: E0314 00:37:43.783768 2604 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 14 00:37:43.784365 kubelet[2604]: I0314 00:37:43.784204 2604 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 14 00:37:43.784365 kubelet[2604]: I0314 00:37:43.784248 2604 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 14 00:37:43.786642 kubelet[2604]: I0314 00:37:43.786583 2604 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 14 00:37:43.794441 kubelet[2604]: E0314 00:37:43.794255 2604 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 14 00:37:43.906373 kubelet[2604]: I0314 00:37:43.905496 2604 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:43.906373 kubelet[2604]: I0314 00:37:43.906021 2604 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:43.907744 kubelet[2604]: I0314 00:37:43.907414 2604 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 14 00:37:43.925370 kubelet[2604]: I0314 00:37:43.925315 2604 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 14 00:37:43.934021 kubelet[2604]: E0314 00:37:43.932781 2604 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 14 00:37:43.938255 kubelet[2604]: I0314 00:37:43.938038 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b75be6b571daaafbc1922f5c1b6f440e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b75be6b571daaafbc1922f5c1b6f440e\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:43.938255 kubelet[2604]: I0314 00:37:43.938231 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:43.938550 kubelet[2604]: I0314 00:37:43.938270 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:43.938550 kubelet[2604]: I0314 00:37:43.938296 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b75be6b571daaafbc1922f5c1b6f440e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b75be6b571daaafbc1922f5c1b6f440e\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:43.938550 kubelet[2604]: I0314 00:37:43.938320 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b75be6b571daaafbc1922f5c1b6f440e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b75be6b571daaafbc1922f5c1b6f440e\") " pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:43.938774 kubelet[2604]: I0314 00:37:43.938455 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" 
(UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:43.938774 kubelet[2604]: I0314 00:37:43.938618 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:43.938774 kubelet[2604]: I0314 00:37:43.938649 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:43.938774 kubelet[2604]: I0314 00:37:43.938670 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost" Mar 14 00:37:43.945901 kubelet[2604]: E0314 00:37:43.941318 2604 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 14 00:37:43.945901 kubelet[2604]: E0314 00:37:43.941879 2604 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 14 00:37:43.971884 kubelet[2604]: I0314 00:37:43.971220 2604 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 
14 00:37:43.971884 kubelet[2604]: I0314 00:37:43.971458 2604 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 14 00:37:44.043370 kubelet[2604]: I0314 00:37:44.042456 2604 apiserver.go:52] "Watching apiserver" Mar 14 00:37:44.117182 kubelet[2604]: I0314 00:37:44.115679 2604 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 14 00:37:44.236229 kubelet[2604]: E0314 00:37:44.235841 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:44.246088 kubelet[2604]: E0314 00:37:44.245884 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:44.249234 kubelet[2604]: E0314 00:37:44.247254 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:44.304418 sudo[2610]: pam_unix(sudo:session): session closed for user root Mar 14 00:37:44.692845 kubelet[2604]: E0314 00:37:44.691893 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:44.693837 kubelet[2604]: E0314 00:37:44.693798 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:44.694266 kubelet[2604]: E0314 00:37:44.694032 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:45.544666 kubelet[2604]: I0314 00:37:45.544553 2604 
kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 14 00:37:45.545565 containerd[1456]: time="2026-03-14T00:37:45.545418287Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 14 00:37:45.546048 kubelet[2604]: I0314 00:37:45.545647 2604 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 14 00:37:45.696375 kubelet[2604]: E0314 00:37:45.694798 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:45.696874 kubelet[2604]: E0314 00:37:45.695835 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:50.296521 kubelet[2604]: E0314 00:37:50.293669 2604 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.657s" Mar 14 00:37:50.721687 kubelet[2604]: E0314 00:37:50.716766 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:51.610921 kubelet[2604]: E0314 00:37:51.610535 2604 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.317s" Mar 14 00:37:51.728509 kubelet[2604]: E0314 00:37:51.728381 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:52.130906 systemd[1]: Created slice kubepods-besteffort-pod21535549_b343_4f7c_8516_f113607c3178.slice - libcontainer container kubepods-besteffort-pod21535549_b343_4f7c_8516_f113607c3178.slice. 
Mar 14 00:37:52.193603 kubelet[2604]: I0314 00:37:52.192576 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnggl\" (UniqueName: \"kubernetes.io/projected/21535549-b343-4f7c-8516-f113607c3178-kube-api-access-hnggl\") pod \"cilium-operator-6f9c7c5859-qtrq5\" (UID: \"21535549-b343-4f7c-8516-f113607c3178\") " pod="kube-system/cilium-operator-6f9c7c5859-qtrq5" Mar 14 00:37:52.193603 kubelet[2604]: I0314 00:37:52.192636 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21535549-b343-4f7c-8516-f113607c3178-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-qtrq5\" (UID: \"21535549-b343-4f7c-8516-f113607c3178\") " pod="kube-system/cilium-operator-6f9c7c5859-qtrq5" Mar 14 00:37:52.264866 systemd[1]: Created slice kubepods-besteffort-pod9f3c67df_62a8_41ed_9d3d_dcb6ee2cf376.slice - libcontainer container kubepods-besteffort-pod9f3c67df_62a8_41ed_9d3d_dcb6ee2cf376.slice. 
Mar 14 00:37:52.293650 kubelet[2604]: I0314 00:37:52.292754 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-etc-cni-netd\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.293650 kubelet[2604]: I0314 00:37:52.292921 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9f3c67df-62a8-41ed-9d3d-dcb6ee2cf376-kube-proxy\") pod \"kube-proxy-mcxjv\" (UID: \"9f3c67df-62a8-41ed-9d3d-dcb6ee2cf376\") " pod="kube-system/kube-proxy-mcxjv" Mar 14 00:37:52.293650 kubelet[2604]: I0314 00:37:52.292995 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-lib-modules\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.293650 kubelet[2604]: I0314 00:37:52.293070 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-xtables-lock\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.293650 kubelet[2604]: I0314 00:37:52.293094 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9f3c67df-62a8-41ed-9d3d-dcb6ee2cf376-lib-modules\") pod \"kube-proxy-mcxjv\" (UID: \"9f3c67df-62a8-41ed-9d3d-dcb6ee2cf376\") " pod="kube-system/kube-proxy-mcxjv" Mar 14 00:37:52.293650 kubelet[2604]: I0314 00:37:52.293228 2604 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-host-proc-sys-net\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.294256 kubelet[2604]: I0314 00:37:52.293267 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t8cw\" (UniqueName: \"kubernetes.io/projected/9f3c67df-62a8-41ed-9d3d-dcb6ee2cf376-kube-api-access-4t8cw\") pod \"kube-proxy-mcxjv\" (UID: \"9f3c67df-62a8-41ed-9d3d-dcb6ee2cf376\") " pod="kube-system/kube-proxy-mcxjv" Mar 14 00:37:52.294256 kubelet[2604]: I0314 00:37:52.293295 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-run\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.294256 kubelet[2604]: I0314 00:37:52.293393 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-bpf-maps\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.294256 kubelet[2604]: I0314 00:37:52.293416 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-cgroup\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.294256 kubelet[2604]: I0314 00:37:52.293443 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cni-path\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.294256 kubelet[2604]: I0314 00:37:52.293466 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15986774-2e6c-4bd6-ae16-0d84ff0809f4-hubble-tls\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.295472 kubelet[2604]: I0314 00:37:52.293490 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgrgt\" (UniqueName: \"kubernetes.io/projected/15986774-2e6c-4bd6-ae16-0d84ff0809f4-kube-api-access-hgrgt\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.295472 kubelet[2604]: I0314 00:37:52.293517 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-hostproc\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.295472 kubelet[2604]: I0314 00:37:52.293588 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15986774-2e6c-4bd6-ae16-0d84ff0809f4-clustermesh-secrets\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.295472 kubelet[2604]: I0314 00:37:52.293616 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-config-path\") pod \"cilium-c6zqv\" (UID: 
\"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.295472 kubelet[2604]: I0314 00:37:52.293638 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-host-proc-sys-kernel\") pod \"cilium-c6zqv\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " pod="kube-system/cilium-c6zqv" Mar 14 00:37:52.295721 kubelet[2604]: I0314 00:37:52.293661 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9f3c67df-62a8-41ed-9d3d-dcb6ee2cf376-xtables-lock\") pod \"kube-proxy-mcxjv\" (UID: \"9f3c67df-62a8-41ed-9d3d-dcb6ee2cf376\") " pod="kube-system/kube-proxy-mcxjv" Mar 14 00:37:52.295814 systemd[1]: Created slice kubepods-burstable-pod15986774_2e6c_4bd6_ae16_0d84ff0809f4.slice - libcontainer container kubepods-burstable-pod15986774_2e6c_4bd6_ae16_0d84ff0809f4.slice. 
Mar 14 00:37:52.461223 kubelet[2604]: E0314 00:37:52.460792 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:52.463585 containerd[1456]: time="2026-03-14T00:37:52.463030180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-qtrq5,Uid:21535549-b343-4f7c-8516-f113607c3178,Namespace:kube-system,Attempt:0,}" Mar 14 00:37:52.569192 kubelet[2604]: E0314 00:37:52.567499 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:52.586174 kubelet[2604]: E0314 00:37:52.584933 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:52.589081 containerd[1456]: time="2026-03-14T00:37:52.588808692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcxjv,Uid:9f3c67df-62a8-41ed-9d3d-dcb6ee2cf376,Namespace:kube-system,Attempt:0,}" Mar 14 00:37:52.597221 containerd[1456]: time="2026-03-14T00:37:52.595869144Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:37:52.597221 containerd[1456]: time="2026-03-14T00:37:52.596391342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:37:52.597221 containerd[1456]: time="2026-03-14T00:37:52.596444967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:52.597221 containerd[1456]: time="2026-03-14T00:37:52.597000787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:52.620776 kubelet[2604]: E0314 00:37:52.620380 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:52.628332 containerd[1456]: time="2026-03-14T00:37:52.627817376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c6zqv,Uid:15986774-2e6c-4bd6-ae16-0d84ff0809f4,Namespace:kube-system,Attempt:0,}" Mar 14 00:37:52.709981 containerd[1456]: time="2026-03-14T00:37:52.700419595Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:37:52.709981 containerd[1456]: time="2026-03-14T00:37:52.700507402Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:37:52.709981 containerd[1456]: time="2026-03-14T00:37:52.700585582Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:52.709981 containerd[1456]: time="2026-03-14T00:37:52.700700327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:52.712301 systemd[1]: Started cri-containerd-fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77.scope - libcontainer container fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77. 
Mar 14 00:37:52.741923 kubelet[2604]: E0314 00:37:52.740030 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:52.751373 systemd[1]: Started cri-containerd-a82bfd8ca662d7b60b4f31084c75b1ab03eae80109e0bda9d38f3a27e533ddf8.scope - libcontainer container a82bfd8ca662d7b60b4f31084c75b1ab03eae80109e0bda9d38f3a27e533ddf8. Mar 14 00:37:52.764996 containerd[1456]: time="2026-03-14T00:37:52.762904760Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 14 00:37:52.764996 containerd[1456]: time="2026-03-14T00:37:52.763069876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 14 00:37:52.764996 containerd[1456]: time="2026-03-14T00:37:52.763095072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:52.764996 containerd[1456]: time="2026-03-14T00:37:52.763337659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 14 00:37:52.830830 kubelet[2604]: E0314 00:37:52.830631 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:52.833463 systemd[1]: Started cri-containerd-a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9.scope - libcontainer container a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9. 
Mar 14 00:37:52.919427 containerd[1456]: time="2026-03-14T00:37:52.918390085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-qtrq5,Uid:21535549-b343-4f7c-8516-f113607c3178,Namespace:kube-system,Attempt:0,} returns sandbox id \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\"" Mar 14 00:37:52.919427 containerd[1456]: time="2026-03-14T00:37:52.918527884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mcxjv,Uid:9f3c67df-62a8-41ed-9d3d-dcb6ee2cf376,Namespace:kube-system,Attempt:0,} returns sandbox id \"a82bfd8ca662d7b60b4f31084c75b1ab03eae80109e0bda9d38f3a27e533ddf8\"" Mar 14 00:37:52.922679 kubelet[2604]: E0314 00:37:52.922595 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:52.923673 containerd[1456]: time="2026-03-14T00:37:52.923573046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c6zqv,Uid:15986774-2e6c-4bd6-ae16-0d84ff0809f4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\"" Mar 14 00:37:52.924885 kubelet[2604]: E0314 00:37:52.924241 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:52.928283 kubelet[2604]: E0314 00:37:52.926411 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:52.934590 containerd[1456]: time="2026-03-14T00:37:52.934483115Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 14 00:37:52.944741 containerd[1456]: time="2026-03-14T00:37:52.944572914Z" level=info 
msg="CreateContainer within sandbox \"a82bfd8ca662d7b60b4f31084c75b1ab03eae80109e0bda9d38f3a27e533ddf8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 14 00:37:52.994882 containerd[1456]: time="2026-03-14T00:37:52.993951986Z" level=info msg="CreateContainer within sandbox \"a82bfd8ca662d7b60b4f31084c75b1ab03eae80109e0bda9d38f3a27e533ddf8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"697dccfb332d5a7df852d4ce91cb120c8fb042cdb12e496038a814d41a3e50f4\"" Mar 14 00:37:52.996085 containerd[1456]: time="2026-03-14T00:37:52.995890173Z" level=info msg="StartContainer for \"697dccfb332d5a7df852d4ce91cb120c8fb042cdb12e496038a814d41a3e50f4\"" Mar 14 00:37:53.081556 systemd[1]: Started cri-containerd-697dccfb332d5a7df852d4ce91cb120c8fb042cdb12e496038a814d41a3e50f4.scope - libcontainer container 697dccfb332d5a7df852d4ce91cb120c8fb042cdb12e496038a814d41a3e50f4. Mar 14 00:37:53.173243 containerd[1456]: time="2026-03-14T00:37:53.173056629Z" level=info msg="StartContainer for \"697dccfb332d5a7df852d4ce91cb120c8fb042cdb12e496038a814d41a3e50f4\" returns successfully" Mar 14 00:37:53.795259 kubelet[2604]: E0314 00:37:53.791841 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:53.808444 kubelet[2604]: E0314 00:37:53.807887 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:37:53.849295 kubelet[2604]: I0314 00:37:53.848014 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mcxjv" podStartSLOduration=1.847899796 podStartE2EDuration="1.847899796s" podCreationTimestamp="2026-03-14 00:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-14 00:37:53.84445227 +0000 UTC m=+11.042685949" watchObservedRunningTime="2026-03-14 00:37:53.847899796 +0000 UTC m=+11.046133435" Mar 14 00:38:16.763489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1408616056.mount: Deactivated successfully. Mar 14 00:38:27.405436 containerd[1456]: time="2026-03-14T00:38:27.404072572Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:27.409319 containerd[1456]: time="2026-03-14T00:38:27.408926727Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 14 00:38:27.412770 containerd[1456]: time="2026-03-14T00:38:27.410570980Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 14 00:38:27.413848 containerd[1456]: time="2026-03-14T00:38:27.413703772Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 34.479179503s" Mar 14 00:38:27.413848 containerd[1456]: time="2026-03-14T00:38:27.413797240Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 14 00:38:27.416971 containerd[1456]: time="2026-03-14T00:38:27.416831291Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 14 00:38:27.431408 containerd[1456]: time="2026-03-14T00:38:27.431348121Z" level=info msg="CreateContainer within sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 14 00:38:27.595587 containerd[1456]: time="2026-03-14T00:38:27.595400342Z" level=info msg="CreateContainer within sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\"" Mar 14 00:38:27.601618 containerd[1456]: time="2026-03-14T00:38:27.598746653Z" level=info msg="StartContainer for \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\"" Mar 14 00:38:27.725623 systemd[1]: Started cri-containerd-43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2.scope - libcontainer container 43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2. Mar 14 00:38:27.864627 containerd[1456]: time="2026-03-14T00:38:27.864426576Z" level=info msg="StartContainer for \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\" returns successfully" Mar 14 00:38:27.921695 systemd[1]: cri-containerd-43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2.scope: Deactivated successfully. 
Mar 14 00:38:28.033816 containerd[1456]: time="2026-03-14T00:38:28.030830005Z" level=info msg="shim disconnected" id=43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2 namespace=k8s.io Mar 14 00:38:28.033816 containerd[1456]: time="2026-03-14T00:38:28.031005798Z" level=warning msg="cleaning up after shim disconnected" id=43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2 namespace=k8s.io Mar 14 00:38:28.033816 containerd[1456]: time="2026-03-14T00:38:28.031165057Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 14 00:38:28.241868 kubelet[2604]: E0314 00:38:28.238628 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 14 00:38:28.265872 containerd[1456]: time="2026-03-14T00:38:28.264920599Z" level=info msg="CreateContainer within sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 14 00:38:28.361949 containerd[1456]: time="2026-03-14T00:38:28.361772777Z" level=info msg="CreateContainer within sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\"" Mar 14 00:38:28.366929 containerd[1456]: time="2026-03-14T00:38:28.365878739Z" level=info msg="StartContainer for \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\"" Mar 14 00:38:28.453556 systemd[1]: Started cri-containerd-64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67.scope - libcontainer container 64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67. 
Mar 14 00:38:28.540032 containerd[1456]: time="2026-03-14T00:38:28.536436778Z" level=info msg="StartContainer for \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\" returns successfully"
Mar 14 00:38:28.587567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2-rootfs.mount: Deactivated successfully.
Mar 14 00:38:28.606805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1622691384.mount: Deactivated successfully.
Mar 14 00:38:28.628526 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 14 00:38:28.629041 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:38:28.629288 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:38:28.656274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 14 00:38:28.657449 systemd[1]: cri-containerd-64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67.scope: Deactivated successfully.
Mar 14 00:38:28.742450 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 14 00:38:28.751079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67-rootfs.mount: Deactivated successfully.
Mar 14 00:38:28.789016 containerd[1456]: time="2026-03-14T00:38:28.787414910Z" level=info msg="shim disconnected" id=64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67 namespace=k8s.io
Mar 14 00:38:28.789016 containerd[1456]: time="2026-03-14T00:38:28.787487100Z" level=warning msg="cleaning up after shim disconnected" id=64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67 namespace=k8s.io
Mar 14 00:38:28.789016 containerd[1456]: time="2026-03-14T00:38:28.787592265Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:38:29.310441 kubelet[2604]: E0314 00:38:29.299841 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:29.346424 containerd[1456]: time="2026-03-14T00:38:29.345708286Z" level=info msg="CreateContainer within sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:38:29.472929 containerd[1456]: time="2026-03-14T00:38:29.472336646Z" level=info msg="CreateContainer within sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\""
Mar 14 00:38:29.481383 containerd[1456]: time="2026-03-14T00:38:29.476614512Z" level=info msg="StartContainer for \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\""
Mar 14 00:38:29.601268 systemd[1]: Started cri-containerd-3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070.scope - libcontainer container 3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070.
Mar 14 00:38:29.671947 containerd[1456]: time="2026-03-14T00:38:29.671722460Z" level=info msg="StartContainer for \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\" returns successfully"
Mar 14 00:38:29.680944 systemd[1]: cri-containerd-3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070.scope: Deactivated successfully.
Mar 14 00:38:29.743563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070-rootfs.mount: Deactivated successfully.
Mar 14 00:38:29.826812 containerd[1456]: time="2026-03-14T00:38:29.826191256Z" level=info msg="shim disconnected" id=3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070 namespace=k8s.io
Mar 14 00:38:29.826812 containerd[1456]: time="2026-03-14T00:38:29.826261553Z" level=warning msg="cleaning up after shim disconnected" id=3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070 namespace=k8s.io
Mar 14 00:38:29.826812 containerd[1456]: time="2026-03-14T00:38:29.826273079Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:38:30.337409 kubelet[2604]: E0314 00:38:30.334961 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:30.362256 containerd[1456]: time="2026-03-14T00:38:30.361811758Z" level=info msg="CreateContainer within sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:38:30.461359 containerd[1456]: time="2026-03-14T00:38:30.460799950Z" level=info msg="CreateContainer within sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\""
Mar 14 00:38:30.462891 containerd[1456]: time="2026-03-14T00:38:30.462664523Z" level=info msg="StartContainer for \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\""
Mar 14 00:38:30.485790 containerd[1456]: time="2026-03-14T00:38:30.484486770Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:38:30.487881 containerd[1456]: time="2026-03-14T00:38:30.487835535Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 14 00:38:30.492213 containerd[1456]: time="2026-03-14T00:38:30.491953340Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 14 00:38:30.495225 containerd[1456]: time="2026-03-14T00:38:30.495193996Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.078176527s"
Mar 14 00:38:30.495339 containerd[1456]: time="2026-03-14T00:38:30.495320482Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 14 00:38:30.509007 containerd[1456]: time="2026-03-14T00:38:30.508847924Z" level=info msg="CreateContainer within sandbox \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 14 00:38:30.553230 containerd[1456]: time="2026-03-14T00:38:30.552981944Z" level=info msg="CreateContainer within sandbox \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\""
Mar 14 00:38:30.555242 systemd[1]: Started cri-containerd-dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a.scope - libcontainer container dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a.
Mar 14 00:38:30.558216 containerd[1456]: time="2026-03-14T00:38:30.557255544Z" level=info msg="StartContainer for \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\""
Mar 14 00:38:30.676407 systemd[1]: Started cri-containerd-8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43.scope - libcontainer container 8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43.
Mar 14 00:38:30.688921 systemd[1]: cri-containerd-dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a.scope: Deactivated successfully.
Mar 14 00:38:30.702744 containerd[1456]: time="2026-03-14T00:38:30.702327118Z" level=info msg="StartContainer for \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\" returns successfully"
Mar 14 00:38:30.749403 containerd[1456]: time="2026-03-14T00:38:30.749278175Z" level=info msg="StartContainer for \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\" returns successfully"
Mar 14 00:38:30.768384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a-rootfs.mount: Deactivated successfully.
Mar 14 00:38:30.843727 containerd[1456]: time="2026-03-14T00:38:30.842960417Z" level=info msg="shim disconnected" id=dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a namespace=k8s.io
Mar 14 00:38:30.843727 containerd[1456]: time="2026-03-14T00:38:30.843029864Z" level=warning msg="cleaning up after shim disconnected" id=dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a namespace=k8s.io
Mar 14 00:38:30.843727 containerd[1456]: time="2026-03-14T00:38:30.843042943Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:38:31.437711 kubelet[2604]: E0314 00:38:31.435897 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:31.469848 kubelet[2604]: E0314 00:38:31.467035 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:31.471079 containerd[1456]: time="2026-03-14T00:38:31.467531895Z" level=info msg="CreateContainer within sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:38:31.638738 containerd[1456]: time="2026-03-14T00:38:31.638598727Z" level=info msg="CreateContainer within sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\""
Mar 14 00:38:31.644847 containerd[1456]: time="2026-03-14T00:38:31.644747410Z" level=info msg="StartContainer for \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\""
Mar 14 00:38:31.799640 kubelet[2604]: I0314 00:38:31.798420 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-qtrq5" podStartSLOduration=3.234426079 podStartE2EDuration="40.79809604s" podCreationTimestamp="2026-03-14 00:37:51 +0000 UTC" firstStartedPulling="2026-03-14 00:37:52.933911506 +0000 UTC m=+10.132145145" lastFinishedPulling="2026-03-14 00:38:30.497581466 +0000 UTC m=+47.695815106" observedRunningTime="2026-03-14 00:38:31.796942441 +0000 UTC m=+48.995176080" watchObservedRunningTime="2026-03-14 00:38:31.79809604 +0000 UTC m=+48.996329679"
Mar 14 00:38:31.938990 systemd[1]: run-containerd-runc-k8s.io-a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63-runc.EzKqFc.mount: Deactivated successfully.
Mar 14 00:38:31.959746 systemd[1]: Started cri-containerd-a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63.scope - libcontainer container a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63.
Mar 14 00:38:32.137929 containerd[1456]: time="2026-03-14T00:38:32.137782839Z" level=info msg="StartContainer for \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\" returns successfully"
Mar 14 00:38:32.496984 kubelet[2604]: E0314 00:38:32.495041 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:32.599233 kubelet[2604]: I0314 00:38:32.597404 2604 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 14 00:38:32.939958 systemd[1]: Created slice kubepods-burstable-podb143a742_8073_49fb_a83b_031ac1f9309c.slice - libcontainer container kubepods-burstable-podb143a742_8073_49fb_a83b_031ac1f9309c.slice.
Mar 14 00:38:32.955367 systemd[1]: Created slice kubepods-burstable-pode69dadb0_a1dd_42ff_9fe0_0e43938ff534.slice - libcontainer container kubepods-burstable-pode69dadb0_a1dd_42ff_9fe0_0e43938ff534.slice.
Mar 14 00:38:33.023987 kubelet[2604]: I0314 00:38:33.023096 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b143a742-8073-49fb-a83b-031ac1f9309c-config-volume\") pod \"coredns-66bc5c9577-9gvgw\" (UID: \"b143a742-8073-49fb-a83b-031ac1f9309c\") " pod="kube-system/coredns-66bc5c9577-9gvgw"
Mar 14 00:38:33.023987 kubelet[2604]: I0314 00:38:33.023252 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz524\" (UniqueName: \"kubernetes.io/projected/e69dadb0-a1dd-42ff-9fe0-0e43938ff534-kube-api-access-sz524\") pod \"coredns-66bc5c9577-kxjqn\" (UID: \"e69dadb0-a1dd-42ff-9fe0-0e43938ff534\") " pod="kube-system/coredns-66bc5c9577-kxjqn"
Mar 14 00:38:33.029096 kubelet[2604]: I0314 00:38:33.023286 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e69dadb0-a1dd-42ff-9fe0-0e43938ff534-config-volume\") pod \"coredns-66bc5c9577-kxjqn\" (UID: \"e69dadb0-a1dd-42ff-9fe0-0e43938ff534\") " pod="kube-system/coredns-66bc5c9577-kxjqn"
Mar 14 00:38:33.029096 kubelet[2604]: I0314 00:38:33.024343 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hzlm\" (UniqueName: \"kubernetes.io/projected/b143a742-8073-49fb-a83b-031ac1f9309c-kube-api-access-4hzlm\") pod \"coredns-66bc5c9577-9gvgw\" (UID: \"b143a742-8073-49fb-a83b-031ac1f9309c\") " pod="kube-system/coredns-66bc5c9577-9gvgw"
Mar 14 00:38:33.264836 kubelet[2604]: E0314 00:38:33.259855 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:33.266929 containerd[1456]: time="2026-03-14T00:38:33.266686972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9gvgw,Uid:b143a742-8073-49fb-a83b-031ac1f9309c,Namespace:kube-system,Attempt:0,}"
Mar 14 00:38:33.286739 kubelet[2604]: E0314 00:38:33.286302 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:33.288686 containerd[1456]: time="2026-03-14T00:38:33.288514013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kxjqn,Uid:e69dadb0-a1dd-42ff-9fe0-0e43938ff534,Namespace:kube-system,Attempt:0,}"
Mar 14 00:38:33.495588 kubelet[2604]: E0314 00:38:33.494705 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:33.604522 kubelet[2604]: I0314 00:38:33.602984 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c6zqv" podStartSLOduration=7.120060544 podStartE2EDuration="41.602965425s" podCreationTimestamp="2026-03-14 00:37:52 +0000 UTC" firstStartedPulling="2026-03-14 00:37:52.932750182 +0000 UTC m=+10.130983821" lastFinishedPulling="2026-03-14 00:38:27.415655063 +0000 UTC m=+44.613888702" observedRunningTime="2026-03-14 00:38:33.593421452 +0000 UTC m=+50.791655111" watchObservedRunningTime="2026-03-14 00:38:33.602965425 +0000 UTC m=+50.801199064"
Mar 14 00:38:34.622567 kubelet[2604]: E0314 00:38:34.617758 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:36.160026 systemd-networkd[1385]: cilium_host: Link UP
Mar 14 00:38:36.163500 systemd-networkd[1385]: cilium_net: Link UP
Mar 14 00:38:36.163879 systemd-networkd[1385]: cilium_net: Gained carrier
Mar 14 00:38:36.164335 systemd-networkd[1385]: cilium_host: Gained carrier
Mar 14 00:38:36.562043 systemd-networkd[1385]: cilium_vxlan: Link UP
Mar 14 00:38:36.562056 systemd-networkd[1385]: cilium_vxlan: Gained carrier
Mar 14 00:38:36.783578 systemd-networkd[1385]: cilium_net: Gained IPv6LL
Mar 14 00:38:36.997452 kernel: NET: Registered PF_ALG protocol family
Mar 14 00:38:37.168992 systemd-networkd[1385]: cilium_host: Gained IPv6LL
Mar 14 00:38:37.872406 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL
Mar 14 00:38:39.313948 systemd-networkd[1385]: lxc_health: Link UP
Mar 14 00:38:39.327503 systemd-networkd[1385]: lxc_health: Gained carrier
Mar 14 00:38:39.554337 systemd-networkd[1385]: lxc8f0fd01237e6: Link UP
Mar 14 00:38:39.571272 kernel: eth0: renamed from tmpa6644
Mar 14 00:38:39.577511 systemd-networkd[1385]: lxc8f0fd01237e6: Gained carrier
Mar 14 00:38:39.646822 systemd-networkd[1385]: lxcad49f7bd8260: Link UP
Mar 14 00:38:39.673239 kernel: eth0: renamed from tmpf392c
Mar 14 00:38:39.686257 systemd-networkd[1385]: lxcad49f7bd8260: Gained carrier
Mar 14 00:38:40.474919 systemd[1]: run-containerd-runc-k8s.io-a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63-runc.5KXEr5.mount: Deactivated successfully.
Mar 14 00:38:40.498286 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Mar 14 00:38:40.612898 kubelet[2604]: E0314 00:38:40.612703 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:41.263614 systemd-networkd[1385]: lxc8f0fd01237e6: Gained IPv6LL
Mar 14 00:38:41.328470 systemd-networkd[1385]: lxcad49f7bd8260: Gained IPv6LL
Mar 14 00:38:41.545498 kubelet[2604]: E0314 00:38:41.544772 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:42.553063 kubelet[2604]: E0314 00:38:42.553022 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:45.720255 containerd[1456]: time="2026-03-14T00:38:45.719847835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:38:45.721600 containerd[1456]: time="2026-03-14T00:38:45.721020024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:38:45.721600 containerd[1456]: time="2026-03-14T00:38:45.721070631Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:38:45.724044 containerd[1456]: time="2026-03-14T00:38:45.722843165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:38:45.761425 containerd[1456]: time="2026-03-14T00:38:45.758650557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:38:45.761775 containerd[1456]: time="2026-03-14T00:38:45.761460009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:38:45.762959 containerd[1456]: time="2026-03-14T00:38:45.762759029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:38:45.764517 containerd[1456]: time="2026-03-14T00:38:45.764377176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:38:45.790498 systemd[1]: Started cri-containerd-f392c36bf62cadd80a9358548b7b8b0b432904f6b98c9eec4e8f2266653d7515.scope - libcontainer container f392c36bf62cadd80a9358548b7b8b0b432904f6b98c9eec4e8f2266653d7515.
Mar 14 00:38:45.833362 systemd[1]: Started cri-containerd-a664454f5a8cbbc57a51816b2b9d23fc8cb69255adfaa0faf1446791d66877a0.scope - libcontainer container a664454f5a8cbbc57a51816b2b9d23fc8cb69255adfaa0faf1446791d66877a0.
Mar 14 00:38:45.846395 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 14 00:38:45.876546 systemd-resolved[1387]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 14 00:38:45.952510 containerd[1456]: time="2026-03-14T00:38:45.952412712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9gvgw,Uid:b143a742-8073-49fb-a83b-031ac1f9309c,Namespace:kube-system,Attempt:0,} returns sandbox id \"a664454f5a8cbbc57a51816b2b9d23fc8cb69255adfaa0faf1446791d66877a0\""
Mar 14 00:38:45.954544 kubelet[2604]: E0314 00:38:45.954010 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:45.968276 containerd[1456]: time="2026-03-14T00:38:45.968060104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-kxjqn,Uid:e69dadb0-a1dd-42ff-9fe0-0e43938ff534,Namespace:kube-system,Attempt:0,} returns sandbox id \"f392c36bf62cadd80a9358548b7b8b0b432904f6b98c9eec4e8f2266653d7515\""
Mar 14 00:38:45.970021 kubelet[2604]: E0314 00:38:45.969559 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:45.975796 containerd[1456]: time="2026-03-14T00:38:45.974877615Z" level=info msg="CreateContainer within sandbox \"a664454f5a8cbbc57a51816b2b9d23fc8cb69255adfaa0faf1446791d66877a0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:38:45.989339 containerd[1456]: time="2026-03-14T00:38:45.985412122Z" level=info msg="CreateContainer within sandbox \"f392c36bf62cadd80a9358548b7b8b0b432904f6b98c9eec4e8f2266653d7515\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 14 00:38:46.042840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2384878276.mount: Deactivated successfully.
Mar 14 00:38:46.063452 containerd[1456]: time="2026-03-14T00:38:46.063373929Z" level=info msg="CreateContainer within sandbox \"a664454f5a8cbbc57a51816b2b9d23fc8cb69255adfaa0faf1446791d66877a0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14c1d596b8fa70f38d49cf77437b718e89b2fd503746c203068db17dd192eae5\""
Mar 14 00:38:46.065314 containerd[1456]: time="2026-03-14T00:38:46.065195307Z" level=info msg="StartContainer for \"14c1d596b8fa70f38d49cf77437b718e89b2fd503746c203068db17dd192eae5\""
Mar 14 00:38:46.103290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709315441.mount: Deactivated successfully.
Mar 14 00:38:46.120229 containerd[1456]: time="2026-03-14T00:38:46.116457989Z" level=info msg="CreateContainer within sandbox \"f392c36bf62cadd80a9358548b7b8b0b432904f6b98c9eec4e8f2266653d7515\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c91ff2be6a2208e01051127fb9ba939ab98281e3fd8414d57fb2d3098691d36c\""
Mar 14 00:38:46.122078 containerd[1456]: time="2026-03-14T00:38:46.120609192Z" level=info msg="StartContainer for \"c91ff2be6a2208e01051127fb9ba939ab98281e3fd8414d57fb2d3098691d36c\""
Mar 14 00:38:46.165383 systemd[1]: Started cri-containerd-14c1d596b8fa70f38d49cf77437b718e89b2fd503746c203068db17dd192eae5.scope - libcontainer container 14c1d596b8fa70f38d49cf77437b718e89b2fd503746c203068db17dd192eae5.
Mar 14 00:38:46.234075 systemd[1]: Started cri-containerd-c91ff2be6a2208e01051127fb9ba939ab98281e3fd8414d57fb2d3098691d36c.scope - libcontainer container c91ff2be6a2208e01051127fb9ba939ab98281e3fd8414d57fb2d3098691d36c.
Mar 14 00:38:46.258449 containerd[1456]: time="2026-03-14T00:38:46.257631564Z" level=info msg="StartContainer for \"14c1d596b8fa70f38d49cf77437b718e89b2fd503746c203068db17dd192eae5\" returns successfully"
Mar 14 00:38:46.310915 containerd[1456]: time="2026-03-14T00:38:46.310632183Z" level=info msg="StartContainer for \"c91ff2be6a2208e01051127fb9ba939ab98281e3fd8414d57fb2d3098691d36c\" returns successfully"
Mar 14 00:38:46.394242 sudo[1653]: pam_unix(sudo:session): session closed for user root
Mar 14 00:38:46.398678 sshd[1650]: pam_unix(sshd:session): session closed for user core
Mar 14 00:38:46.406845 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:36436.service: Deactivated successfully.
Mar 14 00:38:46.414062 systemd[1]: session-9.scope: Deactivated successfully.
Mar 14 00:38:46.415031 systemd[1]: session-9.scope: Consumed 18.827s CPU time, 162.0M memory peak, 0B memory swap peak.
Mar 14 00:38:46.417620 systemd-logind[1446]: Session 9 logged out. Waiting for processes to exit.
Mar 14 00:38:46.427557 systemd-logind[1446]: Removed session 9.
Mar 14 00:38:46.584831 kubelet[2604]: E0314 00:38:46.584320 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:46.591508 kubelet[2604]: E0314 00:38:46.590485 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:46.693607 kubelet[2604]: I0314 00:38:46.693450 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9gvgw" podStartSLOduration=54.693428851 podStartE2EDuration="54.693428851s" podCreationTimestamp="2026-03-14 00:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:38:46.66478422 +0000 UTC m=+63.863017909" watchObservedRunningTime="2026-03-14 00:38:46.693428851 +0000 UTC m=+63.891662509"
Mar 14 00:38:47.597251 kubelet[2604]: E0314 00:38:47.596649 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:47.599302 kubelet[2604]: E0314 00:38:47.598746 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:47.646684 kubelet[2604]: I0314 00:38:47.645998 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kxjqn" podStartSLOduration=55.645977991 podStartE2EDuration="55.645977991s" podCreationTimestamp="2026-03-14 00:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:38:46.694783393 +0000 UTC m=+63.893017043" watchObservedRunningTime="2026-03-14 00:38:47.645977991 +0000 UTC m=+64.844211640"
Mar 14 00:38:48.612467 kubelet[2604]: E0314 00:38:48.611926 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:48.612467 kubelet[2604]: E0314 00:38:48.612438 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:54.607246 kubelet[2604]: E0314 00:38:54.605215 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:38:58.598358 kubelet[2604]: E0314 00:38:58.598035 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:39:01.598682 kubelet[2604]: E0314 00:39:01.598413 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:39:17.602769 kubelet[2604]: E0314 00:39:17.600047 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:39:38.600095 kubelet[2604]: E0314 00:39:38.597653 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:39:58.427074 systemd[1]: Started sshd@9-10.0.0.131:22-10.0.0.1:51294.service - OpenSSH per-connection server daemon (10.0.0.1:51294).
Mar 14 00:39:58.571624 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 51294 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:39:58.578750 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:39:58.607035 systemd-logind[1446]: New session 10 of user core.
Mar 14 00:39:58.628630 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 14 00:39:59.643703 kubelet[2604]: E0314 00:39:59.643554 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:39:59.752730 sshd[4140]: pam_unix(sshd:session): session closed for user core
Mar 14 00:39:59.764928 systemd[1]: sshd@9-10.0.0.131:22-10.0.0.1:51294.service: Deactivated successfully.
Mar 14 00:39:59.769259 systemd[1]: session-10.scope: Deactivated successfully.
Mar 14 00:39:59.787809 systemd-logind[1446]: Session 10 logged out. Waiting for processes to exit.
Mar 14 00:39:59.791605 systemd-logind[1446]: Removed session 10.
Mar 14 00:40:03.607774 kubelet[2604]: E0314 00:40:03.600026 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:40:04.818574 systemd[1]: Started sshd@10-10.0.0.131:22-10.0.0.1:41448.service - OpenSSH per-connection server daemon (10.0.0.1:41448).
Mar 14 00:40:04.979847 sshd[4170]: Accepted publickey for core from 10.0.0.1 port 41448 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:04.983745 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:05.031434 systemd-logind[1446]: New session 11 of user core.
Mar 14 00:40:05.058407 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 14 00:40:05.634492 sshd[4170]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:05.660405 systemd[1]: sshd@10-10.0.0.131:22-10.0.0.1:41448.service: Deactivated successfully.
Mar 14 00:40:05.666456 systemd[1]: session-11.scope: Deactivated successfully.
Mar 14 00:40:05.668391 systemd-logind[1446]: Session 11 logged out. Waiting for processes to exit.
Mar 14 00:40:05.673520 systemd-logind[1446]: Removed session 11.
Mar 14 00:40:09.603447 kubelet[2604]: E0314 00:40:09.601850 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:40:10.694426 systemd[1]: Started sshd@11-10.0.0.131:22-10.0.0.1:36572.service - OpenSSH per-connection server daemon (10.0.0.1:36572).
Mar 14 00:40:10.848215 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 36572 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:10.854396 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:10.875992 systemd-logind[1446]: New session 12 of user core.
Mar 14 00:40:10.905278 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 14 00:40:11.337951 sshd[4185]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:11.354638 systemd[1]: sshd@11-10.0.0.131:22-10.0.0.1:36572.service: Deactivated successfully.
Mar 14 00:40:11.365263 systemd[1]: session-12.scope: Deactivated successfully.
Mar 14 00:40:11.369889 systemd-logind[1446]: Session 12 logged out. Waiting for processes to exit.
Mar 14 00:40:11.376706 systemd-logind[1446]: Removed session 12.
Mar 14 00:40:13.598735 kubelet[2604]: E0314 00:40:13.598633 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:40:13.600417 kubelet[2604]: E0314 00:40:13.600321 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:40:16.402485 systemd[1]: Started sshd@12-10.0.0.131:22-10.0.0.1:36586.service - OpenSSH per-connection server daemon (10.0.0.1:36586).
Mar 14 00:40:16.507370 sshd[4201]: Accepted publickey for core from 10.0.0.1 port 36586 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:16.517330 sshd[4201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:16.536280 systemd-logind[1446]: New session 13 of user core.
Mar 14 00:40:16.550214 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 14 00:40:16.609017 kubelet[2604]: E0314 00:40:16.603649 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:40:16.836743 sshd[4201]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:16.854182 systemd[1]: sshd@12-10.0.0.131:22-10.0.0.1:36586.service: Deactivated successfully.
Mar 14 00:40:16.862551 systemd[1]: session-13.scope: Deactivated successfully.
Mar 14 00:40:16.870965 systemd-logind[1446]: Session 13 logged out. Waiting for processes to exit.
Mar 14 00:40:16.874335 systemd-logind[1446]: Removed session 13.
Mar 14 00:40:20.598035 kubelet[2604]: E0314 00:40:20.597756 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:40:21.886582 systemd[1]: Started sshd@13-10.0.0.131:22-10.0.0.1:57550.service - OpenSSH per-connection server daemon (10.0.0.1:57550).
Mar 14 00:40:21.981301 sshd[4216]: Accepted publickey for core from 10.0.0.1 port 57550 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:21.986709 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:22.017779 systemd-logind[1446]: New session 14 of user core.
Mar 14 00:40:22.054791 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 14 00:40:22.380680 sshd[4216]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:22.394720 systemd[1]: sshd@13-10.0.0.131:22-10.0.0.1:57550.service: Deactivated successfully.
Mar 14 00:40:22.403425 systemd[1]: session-14.scope: Deactivated successfully.
Mar 14 00:40:22.406636 systemd-logind[1446]: Session 14 logged out. Waiting for processes to exit.
Mar 14 00:40:22.410624 systemd-logind[1446]: Removed session 14.
Mar 14 00:40:27.468479 systemd[1]: Started sshd@14-10.0.0.131:22-10.0.0.1:57554.service - OpenSSH per-connection server daemon (10.0.0.1:57554).
Mar 14 00:40:27.570427 sshd[4233]: Accepted publickey for core from 10.0.0.1 port 57554 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:27.575307 sshd[4233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:27.606859 systemd-logind[1446]: New session 15 of user core.
Mar 14 00:40:27.621370 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 14 00:40:28.270048 sshd[4233]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:28.286672 systemd[1]: sshd@14-10.0.0.131:22-10.0.0.1:57554.service: Deactivated successfully.
Mar 14 00:40:28.294878 systemd[1]: session-15.scope: Deactivated successfully.
Mar 14 00:40:28.303601 systemd-logind[1446]: Session 15 logged out. Waiting for processes to exit.
Mar 14 00:40:28.313886 systemd-logind[1446]: Removed session 15.
Mar 14 00:40:33.299010 systemd[1]: Started sshd@15-10.0.0.131:22-10.0.0.1:48336.service - OpenSSH per-connection server daemon (10.0.0.1:48336).
Mar 14 00:40:33.424883 sshd[4248]: Accepted publickey for core from 10.0.0.1 port 48336 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:33.464434 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:33.500950 systemd-logind[1446]: New session 16 of user core.
Mar 14 00:40:33.509814 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 14 00:40:33.967734 sshd[4248]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:33.982005 systemd[1]: sshd@15-10.0.0.131:22-10.0.0.1:48336.service: Deactivated successfully.
Mar 14 00:40:33.994552 systemd[1]: session-16.scope: Deactivated successfully.
Mar 14 00:40:33.998336 systemd-logind[1446]: Session 16 logged out. Waiting for processes to exit.
Mar 14 00:40:34.004781 systemd-logind[1446]: Removed session 16.
Mar 14 00:40:39.019522 systemd[1]: Started sshd@16-10.0.0.131:22-10.0.0.1:48348.service - OpenSSH per-connection server daemon (10.0.0.1:48348).
Mar 14 00:40:39.171242 sshd[4263]: Accepted publickey for core from 10.0.0.1 port 48348 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:39.178730 sshd[4263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:39.193802 systemd-logind[1446]: New session 17 of user core.
Mar 14 00:40:39.211572 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 14 00:40:39.551562 sshd[4263]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:39.565955 systemd[1]: sshd@16-10.0.0.131:22-10.0.0.1:48348.service: Deactivated successfully.
Mar 14 00:40:39.569611 systemd[1]: session-17.scope: Deactivated successfully.
Mar 14 00:40:39.575592 systemd-logind[1446]: Session 17 logged out. Waiting for processes to exit.
Mar 14 00:40:39.591617 systemd[1]: Started sshd@17-10.0.0.131:22-10.0.0.1:48360.service - OpenSSH per-connection server daemon (10.0.0.1:48360).
Mar 14 00:40:39.596892 systemd-logind[1446]: Removed session 17.
Mar 14 00:40:39.674951 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 48360 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:39.676549 sshd[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:39.692714 systemd-logind[1446]: New session 18 of user core.
Mar 14 00:40:39.704634 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 14 00:40:40.047858 sshd[4278]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:40.069045 systemd[1]: sshd@17-10.0.0.131:22-10.0.0.1:48360.service: Deactivated successfully.
Mar 14 00:40:40.071945 systemd[1]: session-18.scope: Deactivated successfully.
Mar 14 00:40:40.077589 systemd-logind[1446]: Session 18 logged out. Waiting for processes to exit.
Mar 14 00:40:40.095238 systemd[1]: Started sshd@18-10.0.0.131:22-10.0.0.1:50072.service - OpenSSH per-connection server daemon (10.0.0.1:50072).
Mar 14 00:40:40.102608 systemd-logind[1446]: Removed session 18.
Mar 14 00:40:40.171823 sshd[4292]: Accepted publickey for core from 10.0.0.1 port 50072 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:40.174465 sshd[4292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:40.182284 systemd-logind[1446]: New session 19 of user core.
Mar 14 00:40:40.191548 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 14 00:40:40.416320 sshd[4292]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:40.428873 systemd[1]: sshd@18-10.0.0.131:22-10.0.0.1:50072.service: Deactivated successfully.
Mar 14 00:40:40.435670 systemd[1]: session-19.scope: Deactivated successfully.
Mar 14 00:40:40.442453 systemd-logind[1446]: Session 19 logged out. Waiting for processes to exit.
Mar 14 00:40:40.446612 systemd-logind[1446]: Removed session 19.
Mar 14 00:40:45.620536 systemd[1]: Started sshd@19-10.0.0.131:22-10.0.0.1:50074.service - OpenSSH per-connection server daemon (10.0.0.1:50074).
Mar 14 00:40:45.902867 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 50074 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:45.908923 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:45.940505 systemd-logind[1446]: New session 20 of user core.
Mar 14 00:40:45.974336 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 14 00:40:46.528518 sshd[4308]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:46.539786 systemd[1]: sshd@19-10.0.0.131:22-10.0.0.1:50074.service: Deactivated successfully.
Mar 14 00:40:46.552734 systemd[1]: session-20.scope: Deactivated successfully.
Mar 14 00:40:46.554802 systemd-logind[1446]: Session 20 logged out. Waiting for processes to exit.
Mar 14 00:40:46.563321 systemd-logind[1446]: Removed session 20.
Mar 14 00:40:51.565442 systemd[1]: Started sshd@20-10.0.0.131:22-10.0.0.1:56078.service - OpenSSH per-connection server daemon (10.0.0.1:56078).
Mar 14 00:40:51.655160 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 56078 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:51.659883 sshd[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:51.680516 systemd-logind[1446]: New session 21 of user core.
Mar 14 00:40:51.688714 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 14 00:40:51.926611 sshd[4322]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:51.939696 systemd[1]: sshd@20-10.0.0.131:22-10.0.0.1:56078.service: Deactivated successfully.
Mar 14 00:40:51.943989 systemd[1]: session-21.scope: Deactivated successfully.
Mar 14 00:40:51.946695 systemd-logind[1446]: Session 21 logged out. Waiting for processes to exit.
Mar 14 00:40:51.951680 systemd-logind[1446]: Removed session 21.
Mar 14 00:40:56.949863 systemd[1]: Started sshd@21-10.0.0.131:22-10.0.0.1:56084.service - OpenSSH per-connection server daemon (10.0.0.1:56084).
Mar 14 00:40:57.014009 sshd[4338]: Accepted publickey for core from 10.0.0.1 port 56084 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:40:57.017522 sshd[4338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:40:57.027032 systemd-logind[1446]: New session 22 of user core.
Mar 14 00:40:57.036496 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 14 00:40:57.201612 sshd[4338]: pam_unix(sshd:session): session closed for user core
Mar 14 00:40:57.208839 systemd[1]: sshd@21-10.0.0.131:22-10.0.0.1:56084.service: Deactivated successfully.
Mar 14 00:40:57.212800 systemd[1]: session-22.scope: Deactivated successfully.
Mar 14 00:40:57.215285 systemd-logind[1446]: Session 22 logged out. Waiting for processes to exit.
Mar 14 00:40:57.218092 systemd-logind[1446]: Removed session 22.
Mar 14 00:40:58.600050 kubelet[2604]: E0314 00:40:58.599357 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:02.226723 systemd[1]: Started sshd@22-10.0.0.131:22-10.0.0.1:46916.service - OpenSSH per-connection server daemon (10.0.0.1:46916).
Mar 14 00:41:02.304719 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 46916 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:02.308395 sshd[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:02.320828 systemd-logind[1446]: New session 23 of user core.
Mar 14 00:41:02.336231 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 14 00:41:02.510373 sshd[4353]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:02.529543 systemd[1]: sshd@22-10.0.0.131:22-10.0.0.1:46916.service: Deactivated successfully.
Mar 14 00:41:02.532792 systemd[1]: session-23.scope: Deactivated successfully.
Mar 14 00:41:02.536354 systemd-logind[1446]: Session 23 logged out. Waiting for processes to exit.
Mar 14 00:41:02.544840 systemd[1]: Started sshd@23-10.0.0.131:22-10.0.0.1:46922.service - OpenSSH per-connection server daemon (10.0.0.1:46922).
Mar 14 00:41:02.547407 systemd-logind[1446]: Removed session 23.
Mar 14 00:41:02.593455 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 46922 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:02.596286 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:02.606627 systemd-logind[1446]: New session 24 of user core.
Mar 14 00:41:02.611414 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 14 00:41:03.171459 sshd[4367]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:03.182367 systemd[1]: sshd@23-10.0.0.131:22-10.0.0.1:46922.service: Deactivated successfully.
Mar 14 00:41:03.186058 systemd[1]: session-24.scope: Deactivated successfully.
Mar 14 00:41:03.188325 systemd-logind[1446]: Session 24 logged out. Waiting for processes to exit.
Mar 14 00:41:03.202766 systemd[1]: Started sshd@24-10.0.0.131:22-10.0.0.1:46930.service - OpenSSH per-connection server daemon (10.0.0.1:46930).
Mar 14 00:41:03.209460 systemd-logind[1446]: Removed session 24.
Mar 14 00:41:03.335752 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 46930 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:03.341506 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:03.364898 systemd-logind[1446]: New session 25 of user core.
Mar 14 00:41:03.382083 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 14 00:41:04.870619 sshd[4380]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:04.901811 systemd[1]: sshd@24-10.0.0.131:22-10.0.0.1:46930.service: Deactivated successfully.
Mar 14 00:41:04.906816 systemd[1]: session-25.scope: Deactivated successfully.
Mar 14 00:41:04.912350 systemd-logind[1446]: Session 25 logged out. Waiting for processes to exit.
Mar 14 00:41:04.929761 systemd[1]: Started sshd@25-10.0.0.131:22-10.0.0.1:46942.service - OpenSSH per-connection server daemon (10.0.0.1:46942).
Mar 14 00:41:04.938549 systemd-logind[1446]: Removed session 25.
Mar 14 00:41:05.073882 sshd[4402]: Accepted publickey for core from 10.0.0.1 port 46942 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:05.080991 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:05.101284 systemd-logind[1446]: New session 26 of user core.
Mar 14 00:41:05.111870 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 14 00:41:05.695946 sshd[4402]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:05.707777 systemd[1]: sshd@25-10.0.0.131:22-10.0.0.1:46942.service: Deactivated successfully.
Mar 14 00:41:05.712088 systemd[1]: session-26.scope: Deactivated successfully.
Mar 14 00:41:05.714364 systemd-logind[1446]: Session 26 logged out. Waiting for processes to exit.
Mar 14 00:41:05.732448 systemd[1]: Started sshd@26-10.0.0.131:22-10.0.0.1:46954.service - OpenSSH per-connection server daemon (10.0.0.1:46954).
Mar 14 00:41:05.734806 systemd-logind[1446]: Removed session 26.
Mar 14 00:41:05.805412 sshd[4414]: Accepted publickey for core from 10.0.0.1 port 46954 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:05.809003 sshd[4414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:05.826196 systemd-logind[1446]: New session 27 of user core.
Mar 14 00:41:05.838998 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 14 00:41:06.059218 sshd[4414]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:06.067881 systemd[1]: sshd@26-10.0.0.131:22-10.0.0.1:46954.service: Deactivated successfully.
Mar 14 00:41:06.076693 systemd[1]: session-27.scope: Deactivated successfully.
Mar 14 00:41:06.078499 systemd-logind[1446]: Session 27 logged out. Waiting for processes to exit.
Mar 14 00:41:06.082633 systemd-logind[1446]: Removed session 27.
Mar 14 00:41:11.112382 systemd[1]: Started sshd@27-10.0.0.131:22-10.0.0.1:37766.service - OpenSSH per-connection server daemon (10.0.0.1:37766).
Mar 14 00:41:11.195474 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 37766 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:11.199235 sshd[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:11.213813 systemd-logind[1446]: New session 28 of user core.
Mar 14 00:41:11.224903 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 14 00:41:11.573475 sshd[4431]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:11.592627 systemd[1]: sshd@27-10.0.0.131:22-10.0.0.1:37766.service: Deactivated successfully.
Mar 14 00:41:11.598859 systemd[1]: session-28.scope: Deactivated successfully.
Mar 14 00:41:11.602230 systemd-logind[1446]: Session 28 logged out. Waiting for processes to exit.
Mar 14 00:41:11.605710 systemd-logind[1446]: Removed session 28.
Mar 14 00:41:16.618580 systemd[1]: Started sshd@28-10.0.0.131:22-10.0.0.1:37774.service - OpenSSH per-connection server daemon (10.0.0.1:37774).
Mar 14 00:41:16.703717 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 37774 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:16.707702 sshd[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:16.746798 systemd-logind[1446]: New session 29 of user core.
Mar 14 00:41:16.757280 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 14 00:41:17.065858 sshd[4447]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:17.087050 systemd[1]: sshd@28-10.0.0.131:22-10.0.0.1:37774.service: Deactivated successfully.
Mar 14 00:41:17.102967 systemd[1]: session-29.scope: Deactivated successfully.
Mar 14 00:41:17.118320 systemd-logind[1446]: Session 29 logged out. Waiting for processes to exit.
Mar 14 00:41:17.125961 systemd-logind[1446]: Removed session 29.
Mar 14 00:41:17.613939 kubelet[2604]: E0314 00:41:17.611495 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:19.597954 kubelet[2604]: E0314 00:41:19.597827 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:22.233529 systemd[1]: Started sshd@29-10.0.0.131:22-10.0.0.1:58704.service - OpenSSH per-connection server daemon (10.0.0.1:58704).
Mar 14 00:41:22.575764 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 58704 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:22.585841 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:22.721424 systemd-logind[1446]: New session 30 of user core.
Mar 14 00:41:22.778024 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 14 00:41:28.434829 update_engine[1447]: I20260314 00:41:28.433779 1447 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Mar 14 00:41:28.434829 update_engine[1447]: I20260314 00:41:28.434242 1447 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Mar 14 00:41:28.476254 update_engine[1447]: I20260314 00:41:28.440258 1447 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Mar 14 00:41:28.476254 update_engine[1447]: I20260314 00:41:28.441987 1447 omaha_request_params.cc:62] Current group set to lts
Mar 14 00:41:28.478208 update_engine[1447]: I20260314 00:41:28.477634 1447 update_attempter.cc:499] Already updated boot flags. Skipping.
Mar 14 00:41:28.478208 update_engine[1447]: I20260314 00:41:28.477734 1447 update_attempter.cc:643] Scheduling an action processor start.
Mar 14 00:41:28.478208 update_engine[1447]: I20260314 00:41:28.477813 1447 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 14 00:41:28.479936 update_engine[1447]: I20260314 00:41:28.478653 1447 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Mar 14 00:41:28.484534 update_engine[1447]: I20260314 00:41:28.480597 1447 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 14 00:41:28.484534 update_engine[1447]: I20260314 00:41:28.480625 1447 omaha_request_action.cc:272] Request:
Mar 14 00:41:28.484534 update_engine[1447]:
Mar 14 00:41:28.484534 update_engine[1447]:
Mar 14 00:41:28.484534 update_engine[1447]:
Mar 14 00:41:28.484534 update_engine[1447]:
Mar 14 00:41:28.484534 update_engine[1447]:
Mar 14 00:41:28.484534 update_engine[1447]:
Mar 14 00:41:28.484534 update_engine[1447]:
Mar 14 00:41:28.484534 update_engine[1447]:
Mar 14 00:41:28.484534 update_engine[1447]: I20260314 00:41:28.480876 1447 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:41:28.509233 update_engine[1447]: I20260314 00:41:28.509018 1447 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:41:28.510065 update_engine[1447]: I20260314 00:41:28.509841 1447 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:41:28.907569 locksmithd[1484]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Mar 14 00:41:29.020221 update_engine[1447]: E20260314 00:41:28.678365 1447 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:41:29.020221 update_engine[1447]: I20260314 00:41:28.810850 1447 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Mar 14 00:41:29.114227 kubelet[2604]: E0314 00:41:29.114026 2604 kubelet.go:2618] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.49s"
Mar 14 00:41:29.123910 kubelet[2604]: E0314 00:41:29.120664 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:29.256719 sshd[4467]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:29.288577 systemd[1]: sshd@29-10.0.0.131:22-10.0.0.1:58704.service: Deactivated successfully.
Mar 14 00:41:29.300617 systemd[1]: session-30.scope: Deactivated successfully.
Mar 14 00:41:29.301875 systemd[1]: session-30.scope: Consumed 2.507s CPU time.
Mar 14 00:41:29.310425 systemd-logind[1446]: Session 30 logged out. Waiting for processes to exit.
Mar 14 00:41:29.321703 systemd-logind[1446]: Removed session 30.
Mar 14 00:41:31.609954 kubelet[2604]: E0314 00:41:31.609646 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:33.608666 kubelet[2604]: E0314 00:41:33.599441 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:34.302987 systemd[1]: Started sshd@30-10.0.0.131:22-10.0.0.1:48908.service - OpenSSH per-connection server daemon (10.0.0.1:48908).
Mar 14 00:41:34.399869 sshd[4485]: Accepted publickey for core from 10.0.0.1 port 48908 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:34.399046 sshd[4485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:34.422225 systemd-logind[1446]: New session 31 of user core.
Mar 14 00:41:34.436525 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 14 00:41:34.604387 kubelet[2604]: E0314 00:41:34.597911 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:34.776015 sshd[4485]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:34.792311 systemd[1]: sshd@30-10.0.0.131:22-10.0.0.1:48908.service: Deactivated successfully.
Mar 14 00:41:34.796776 systemd[1]: session-31.scope: Deactivated successfully.
Mar 14 00:41:34.799959 systemd-logind[1446]: Session 31 logged out. Waiting for processes to exit.
Mar 14 00:41:34.816867 systemd[1]: Started sshd@31-10.0.0.131:22-10.0.0.1:48910.service - OpenSSH per-connection server daemon (10.0.0.1:48910).
Mar 14 00:41:34.824849 systemd-logind[1446]: Removed session 31.
Mar 14 00:41:34.901888 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 48910 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:34.905236 sshd[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:34.934808 systemd-logind[1446]: New session 32 of user core.
Mar 14 00:41:34.947096 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 14 00:41:37.140518 containerd[1456]: time="2026-03-14T00:41:37.140426763Z" level=info msg="StopContainer for \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\" with timeout 30 (s)"
Mar 14 00:41:37.142624 containerd[1456]: time="2026-03-14T00:41:37.142509872Z" level=info msg="Stop container \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\" with signal terminated"
Mar 14 00:41:37.196786 systemd[1]: run-containerd-runc-k8s.io-a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63-runc.n9EEFy.mount: Deactivated successfully.
Mar 14 00:41:37.296877 systemd[1]: cri-containerd-8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43.scope: Deactivated successfully.
Mar 14 00:41:37.298316 systemd[1]: cri-containerd-8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43.scope: Consumed 2.355s CPU time.
Mar 14 00:41:37.350695 containerd[1456]: time="2026-03-14T00:41:37.348010206Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 14 00:41:37.388701 containerd[1456]: time="2026-03-14T00:41:37.386976342Z" level=info msg="StopContainer for \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\" with timeout 2 (s)"
Mar 14 00:41:37.388896 containerd[1456]: time="2026-03-14T00:41:37.388840640Z" level=info msg="Stop container \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\" with signal terminated"
Mar 14 00:41:37.432545 systemd-networkd[1385]: lxc_health: Link DOWN
Mar 14 00:41:37.432560 systemd-networkd[1385]: lxc_health: Lost carrier
Mar 14 00:41:37.482175 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43-rootfs.mount: Deactivated successfully.
Mar 14 00:41:37.529228 containerd[1456]: time="2026-03-14T00:41:37.520468617Z" level=info msg="shim disconnected" id=8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43 namespace=k8s.io
Mar 14 00:41:37.529228 containerd[1456]: time="2026-03-14T00:41:37.525284490Z" level=warning msg="cleaning up after shim disconnected" id=8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43 namespace=k8s.io
Mar 14 00:41:37.529228 containerd[1456]: time="2026-03-14T00:41:37.525316203Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:41:37.571414 systemd[1]: cri-containerd-a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63.scope: Deactivated successfully.
Mar 14 00:41:37.577745 systemd[1]: cri-containerd-a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63.scope: Consumed 22.004s CPU time.
Mar 14 00:41:37.625198 containerd[1456]: time="2026-03-14T00:41:37.623911382Z" level=info msg="StopContainer for \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\" returns successfully"
Mar 14 00:41:37.625439 containerd[1456]: time="2026-03-14T00:41:37.625399246Z" level=info msg="StopPodSandbox for \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\""
Mar 14 00:41:37.625589 containerd[1456]: time="2026-03-14T00:41:37.625456260Z" level=info msg="Container to stop \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:41:37.630716 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77-shm.mount: Deactivated successfully.
Mar 14 00:41:37.654699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63-rootfs.mount: Deactivated successfully.
Mar 14 00:41:37.662677 systemd[1]: cri-containerd-fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77.scope: Deactivated successfully.
Mar 14 00:41:37.692273 containerd[1456]: time="2026-03-14T00:41:37.691693185Z" level=info msg="shim disconnected" id=a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63 namespace=k8s.io
Mar 14 00:41:37.692273 containerd[1456]: time="2026-03-14T00:41:37.691777773Z" level=warning msg="cleaning up after shim disconnected" id=a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63 namespace=k8s.io
Mar 14 00:41:37.692273 containerd[1456]: time="2026-03-14T00:41:37.691795440Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:41:37.754288 containerd[1456]: time="2026-03-14T00:41:37.751870558Z" level=info msg="StopContainer for \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\" returns successfully"
Mar 14 00:41:37.754288 containerd[1456]: time="2026-03-14T00:41:37.753896407Z" level=info msg="StopPodSandbox for \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\""
Mar 14 00:41:37.754288 containerd[1456]: time="2026-03-14T00:41:37.753944544Z" level=info msg="Container to stop \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:41:37.754288 containerd[1456]: time="2026-03-14T00:41:37.753967280Z" level=info msg="Container to stop \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:41:37.754288 containerd[1456]: time="2026-03-14T00:41:37.753984093Z" level=info msg="Container to stop \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:41:37.754288 containerd[1456]: time="2026-03-14T00:41:37.754001439Z" level=info msg="Container to stop \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:41:37.754288 containerd[1456]: time="2026-03-14T00:41:37.754013522Z" level=info msg="Container to stop \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 14 00:41:37.785867 systemd[1]: cri-containerd-a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9.scope: Deactivated successfully.
Mar 14 00:41:37.816370 containerd[1456]: time="2026-03-14T00:41:37.816009487Z" level=info msg="shim disconnected" id=fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77 namespace=k8s.io
Mar 14 00:41:37.816670 containerd[1456]: time="2026-03-14T00:41:37.816432545Z" level=warning msg="cleaning up after shim disconnected" id=fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77 namespace=k8s.io
Mar 14 00:41:37.816670 containerd[1456]: time="2026-03-14T00:41:37.816452685Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:41:37.896253 containerd[1456]: time="2026-03-14T00:41:37.895997606Z" level=info msg="shim disconnected" id=a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9 namespace=k8s.io
Mar 14 00:41:37.896253 containerd[1456]: time="2026-03-14T00:41:37.896192938Z" level=warning msg="cleaning up after shim disconnected" id=a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9 namespace=k8s.io
Mar 14 00:41:37.896253 containerd[1456]: time="2026-03-14T00:41:37.896215313Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:41:37.900309 containerd[1456]: time="2026-03-14T00:41:37.899607933Z" level=info msg="TearDown network for sandbox \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\" successfully"
Mar 14 00:41:37.900309 containerd[1456]: time="2026-03-14T00:41:37.899680578Z" level=info msg="StopPodSandbox for \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\" returns successfully"
Mar 14 00:41:37.945414 containerd[1456]: time="2026-03-14T00:41:37.943845444Z" level=info msg="TearDown network for sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" successfully"
Mar 14 00:41:37.945414 containerd[1456]: time="2026-03-14T00:41:37.943898270Z" level=info msg="StopPodSandbox for \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" returns successfully"
Mar 14 00:41:38.099347 kubelet[2604]: I0314 00:41:38.098420 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-bpf-maps\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") "
Mar 14 00:41:38.099347 kubelet[2604]: I0314 00:41:38.098496 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgrgt\" (UniqueName: \"kubernetes.io/projected/15986774-2e6c-4bd6-ae16-0d84ff0809f4-kube-api-access-hgrgt\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") "
Mar 14 00:41:38.099347 kubelet[2604]: I0314 00:41:38.098527 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-hostproc\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") "
Mar 14 00:41:38.099347 kubelet[2604]: I0314 00:41:38.098552 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-cgroup\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") "
Mar 14 00:41:38.099347 kubelet[2604]: I0314 00:41:38.098577 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-host-proc-sys-kernel\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") "
Mar 14 00:41:38.099347 kubelet[2604]: I0314 00:41:38.098633 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-host-proc-sys-net\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") "
Mar 14 00:41:38.100490 kubelet[2604]: I0314 00:41:38.098661 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-xtables-lock\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") "
Mar 14 00:41:38.100490 kubelet[2604]: I0314 00:41:38.098741 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15986774-2e6c-4bd6-ae16-0d84ff0809f4-clustermesh-secrets\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") "
Mar 14 00:41:38.100490 kubelet[2604]: I0314 00:41:38.098766 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-lib-modules\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") "
Mar 14 00:41:38.100490 kubelet[2604]: I0314 00:41:38.098789 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-run\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") "
Mar 14 00:41:38.100490 kubelet[2604]: I0314 00:41:38.098813 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-config-path\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " Mar 14 00:41:38.100490 kubelet[2604]: I0314 00:41:38.098840 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-etc-cni-netd\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " Mar 14 00:41:38.100818 kubelet[2604]: I0314 00:41:38.098863 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cni-path\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " Mar 14 00:41:38.100818 kubelet[2604]: I0314 00:41:38.098889 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15986774-2e6c-4bd6-ae16-0d84ff0809f4-hubble-tls\") pod \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\" (UID: \"15986774-2e6c-4bd6-ae16-0d84ff0809f4\") " Mar 14 00:41:38.100818 kubelet[2604]: I0314 00:41:38.098923 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnggl\" (UniqueName: \"kubernetes.io/projected/21535549-b343-4f7c-8516-f113607c3178-kube-api-access-hnggl\") pod \"21535549-b343-4f7c-8516-f113607c3178\" (UID: \"21535549-b343-4f7c-8516-f113607c3178\") " Mar 14 00:41:38.100818 kubelet[2604]: I0314 00:41:38.098958 2604 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21535549-b343-4f7c-8516-f113607c3178-cilium-config-path\") pod \"21535549-b343-4f7c-8516-f113607c3178\" (UID: \"21535549-b343-4f7c-8516-f113607c3178\") " Mar 14 00:41:38.104095 
kubelet[2604]: I0314 00:41:38.101245 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:41:38.104095 kubelet[2604]: I0314 00:41:38.101317 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:41:38.104095 kubelet[2604]: I0314 00:41:38.101347 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:41:38.104095 kubelet[2604]: I0314 00:41:38.101850 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cni-path" (OuterVolumeSpecName: "cni-path") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:41:38.104793 kubelet[2604]: I0314 00:41:38.104703 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:41:38.104850 kubelet[2604]: I0314 00:41:38.104790 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:41:38.107378 kubelet[2604]: I0314 00:41:38.106872 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:41:38.108452 kubelet[2604]: I0314 00:41:38.107913 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:41:38.109635 kubelet[2604]: I0314 00:41:38.109324 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:41:38.109889 kubelet[2604]: I0314 00:41:38.109863 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-hostproc" (OuterVolumeSpecName: "hostproc") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 14 00:41:38.114335 kubelet[2604]: I0314 00:41:38.114250 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15986774-2e6c-4bd6-ae16-0d84ff0809f4-kube-api-access-hgrgt" (OuterVolumeSpecName: "kube-api-access-hgrgt") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "kube-api-access-hgrgt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:41:38.117068 kubelet[2604]: I0314 00:41:38.117034 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21535549-b343-4f7c-8516-f113607c3178-kube-api-access-hnggl" (OuterVolumeSpecName: "kube-api-access-hnggl") pod "21535549-b343-4f7c-8516-f113607c3178" (UID: "21535549-b343-4f7c-8516-f113607c3178"). InnerVolumeSpecName "kube-api-access-hnggl". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:41:38.117517 kubelet[2604]: I0314 00:41:38.117435 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15986774-2e6c-4bd6-ae16-0d84ff0809f4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 14 00:41:38.119313 kubelet[2604]: I0314 00:41:38.118005 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21535549-b343-4f7c-8516-f113607c3178-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "21535549-b343-4f7c-8516-f113607c3178" (UID: "21535549-b343-4f7c-8516-f113607c3178"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:41:38.119682 kubelet[2604]: I0314 00:41:38.119592 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15986774-2e6c-4bd6-ae16-0d84ff0809f4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 14 00:41:38.120023 kubelet[2604]: I0314 00:41:38.119959 2604 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "15986774-2e6c-4bd6-ae16-0d84ff0809f4" (UID: "15986774-2e6c-4bd6-ae16-0d84ff0809f4"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 14 00:41:38.146331 kubelet[2604]: I0314 00:41:38.143782 2604 scope.go:117] "RemoveContainer" containerID="a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63" Mar 14 00:41:38.166085 containerd[1456]: time="2026-03-14T00:41:38.162687493Z" level=info msg="RemoveContainer for \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\"" Mar 14 00:41:38.184562 systemd[1]: Removed slice kubepods-burstable-pod15986774_2e6c_4bd6_ae16_0d84ff0809f4.slice - libcontainer container kubepods-burstable-pod15986774_2e6c_4bd6_ae16_0d84ff0809f4.slice. Mar 14 00:41:38.185467 systemd[1]: kubepods-burstable-pod15986774_2e6c_4bd6_ae16_0d84ff0809f4.slice: Consumed 22.291s CPU time. Mar 14 00:41:38.193048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9-rootfs.mount: Deactivated successfully. Mar 14 00:41:38.193381 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9-shm.mount: Deactivated successfully. Mar 14 00:41:38.193517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77-rootfs.mount: Deactivated successfully. Mar 14 00:41:38.193643 systemd[1]: var-lib-kubelet-pods-15986774\x2d2e6c\x2d4bd6\x2dae16\x2d0d84ff0809f4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhgrgt.mount: Deactivated successfully. Mar 14 00:41:38.193777 systemd[1]: var-lib-kubelet-pods-15986774\x2d2e6c\x2d4bd6\x2dae16\x2d0d84ff0809f4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 14 00:41:38.193919 systemd[1]: var-lib-kubelet-pods-15986774\x2d2e6c\x2d4bd6\x2dae16\x2d0d84ff0809f4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Mar 14 00:41:38.194054 systemd[1]: var-lib-kubelet-pods-21535549\x2db343\x2d4f7c\x2d8516\x2df113607c3178-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhnggl.mount: Deactivated successfully.
Mar 14 00:41:38.207258 kubelet[2604]: I0314 00:41:38.206381 2604 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hnggl\" (UniqueName: \"kubernetes.io/projected/21535549-b343-4f7c-8516-f113607c3178-kube-api-access-hnggl\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207258 kubelet[2604]: I0314 00:41:38.206419 2604 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/21535549-b343-4f7c-8516-f113607c3178-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207258 kubelet[2604]: I0314 00:41:38.206436 2604 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-bpf-maps\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207258 kubelet[2604]: I0314 00:41:38.206454 2604 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hgrgt\" (UniqueName: \"kubernetes.io/projected/15986774-2e6c-4bd6-ae16-0d84ff0809f4-kube-api-access-hgrgt\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207258 kubelet[2604]: I0314 00:41:38.206466 2604 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-hostproc\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207258 kubelet[2604]: I0314 00:41:38.206479 2604 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207258 kubelet[2604]: I0314 00:41:38.206493 2604 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207258 kubelet[2604]: I0314 00:41:38.206507 2604 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207643 kubelet[2604]: I0314 00:41:38.206520 2604 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-xtables-lock\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207643 kubelet[2604]: I0314 00:41:38.206536 2604 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/15986774-2e6c-4bd6-ae16-0d84ff0809f4-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207643 kubelet[2604]: I0314 00:41:38.206551 2604 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-lib-modules\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207643 kubelet[2604]: I0314 00:41:38.206564 2604 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-run\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207643 kubelet[2604]: I0314 00:41:38.206578 2604 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207643 kubelet[2604]: I0314 00:41:38.206592 2604 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207643 kubelet[2604]: I0314 00:41:38.206605 2604 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/15986774-2e6c-4bd6-ae16-0d84ff0809f4-cni-path\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.207643 kubelet[2604]: I0314 00:41:38.206616 2604 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/15986774-2e6c-4bd6-ae16-0d84ff0809f4-hubble-tls\") on node \"localhost\" DevicePath \"\""
Mar 14 00:41:38.220722 systemd[1]: Removed slice kubepods-besteffort-pod21535549_b343_4f7c_8516_f113607c3178.slice - libcontainer container kubepods-besteffort-pod21535549_b343_4f7c_8516_f113607c3178.slice.
Mar 14 00:41:38.225959 kubelet[2604]: I0314 00:41:38.224840 2604 scope.go:117] "RemoveContainer" containerID="dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a"
Mar 14 00:41:38.226038 containerd[1456]: time="2026-03-14T00:41:38.221911617Z" level=info msg="RemoveContainer for \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\" returns successfully"
Mar 14 00:41:38.220868 systemd[1]: kubepods-besteffort-pod21535549_b343_4f7c_8516_f113607c3178.slice: Consumed 2.406s CPU time.
Mar 14 00:41:38.233914 containerd[1456]: time="2026-03-14T00:41:38.233728656Z" level=info msg="RemoveContainer for \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\""
Mar 14 00:41:38.266458 containerd[1456]: time="2026-03-14T00:41:38.266404425Z" level=info msg="RemoveContainer for \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\" returns successfully"
Mar 14 00:41:38.271008 kubelet[2604]: I0314 00:41:38.269096 2604 scope.go:117] "RemoveContainer" containerID="3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070"
Mar 14 00:41:38.282877 containerd[1456]: time="2026-03-14T00:41:38.281641015Z" level=info msg="RemoveContainer for \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\""
Mar 14 00:41:38.293605 containerd[1456]: time="2026-03-14T00:41:38.293554416Z" level=info msg="RemoveContainer for \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\" returns successfully"
Mar 14 00:41:38.295354 kubelet[2604]: I0314 00:41:38.294884 2604 scope.go:117] "RemoveContainer" containerID="64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67"
Mar 14 00:41:38.302941 containerd[1456]: time="2026-03-14T00:41:38.302783564Z" level=info msg="RemoveContainer for \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\""
Mar 14 00:41:38.317847 containerd[1456]: time="2026-03-14T00:41:38.317477750Z" level=info msg="RemoveContainer for \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\" returns successfully"
Mar 14 00:41:38.317967 kubelet[2604]: I0314 00:41:38.317942 2604 scope.go:117] "RemoveContainer" containerID="43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2"
Mar 14 00:41:38.328988 containerd[1456]: time="2026-03-14T00:41:38.328608777Z" level=info msg="RemoveContainer for \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\""
Mar 14 00:41:38.345702 containerd[1456]: time="2026-03-14T00:41:38.345324742Z" level=info msg="RemoveContainer for \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\" returns successfully"
Mar 14 00:41:38.345851 kubelet[2604]: I0314 00:41:38.345726 2604 scope.go:117] "RemoveContainer" containerID="a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63"
Mar 14 00:41:38.346490 containerd[1456]: time="2026-03-14T00:41:38.346004120Z" level=error msg="ContainerStatus for \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\": not found"
Mar 14 00:41:38.355629 kubelet[2604]: E0314 00:41:38.352576 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\": not found" containerID="a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63"
Mar 14 00:41:38.355629 kubelet[2604]: I0314 00:41:38.352617 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63"} err="failed to get container status \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\": rpc error: code = NotFound desc = an error occurred when try to find container \"a476e732a05c184a3b55d7e469a79efde83b804008cc78d0a224bbbdea567f63\": not found"
Mar 14 00:41:38.355629 kubelet[2604]: I0314 00:41:38.352666 2604 scope.go:117] "RemoveContainer" containerID="dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a"
Mar 14 00:41:38.355629 kubelet[2604]: E0314 00:41:38.354577 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\": not found" containerID="dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a"
Mar 14 00:41:38.355629 kubelet[2604]: I0314 00:41:38.354613 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a"} err="failed to get container status \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\": rpc error: code = NotFound desc = an error occurred when try to find container \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\": not found"
Mar 14 00:41:38.355629 kubelet[2604]: I0314 00:41:38.354640 2604 scope.go:117] "RemoveContainer" containerID="3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070"
Mar 14 00:41:38.367390 containerd[1456]: time="2026-03-14T00:41:38.354089788Z" level=error msg="ContainerStatus for \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dc6e05e9418cb62413f2dadcf93a4c3a309014da6bac8dd4e794206677e5a04a\": not found"
Mar 14 00:41:38.367390 containerd[1456]: time="2026-03-14T00:41:38.354905451Z" level=error msg="ContainerStatus for \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\": not found"
Mar 14 00:41:38.367489 kubelet[2604]: E0314 00:41:38.362021 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\": not found" containerID="3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070"
Mar 14 00:41:38.367489 kubelet[2604]: I0314 00:41:38.362051 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070"} err="failed to get container status \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\": rpc error: code = NotFound desc = an error occurred when try to find container \"3efc2f50fc90242a1ea14f565277d6d2da4acaede1c066ad99cc6e3f4cd4d070\": not found"
Mar 14 00:41:38.367489 kubelet[2604]: I0314 00:41:38.362072 2604 scope.go:117] "RemoveContainer" containerID="64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67"
Mar 14 00:41:38.368440 containerd[1456]: time="2026-03-14T00:41:38.367988385Z" level=error msg="ContainerStatus for \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\": not found"
Mar 14 00:41:38.368512 kubelet[2604]: E0314 00:41:38.368433 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\": not found" containerID="64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67"
Mar 14 00:41:38.368512 kubelet[2604]: I0314 00:41:38.368464 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67"} err="failed to get container status \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\": rpc error: code = NotFound desc = an error occurred when try to find container \"64fba081f2b958718b79a53b549932123a93f56a4e0994d0f2ca0d3f1ef1ab67\": not found"
Mar 14 00:41:38.368512 kubelet[2604]: I0314 00:41:38.368489 2604 scope.go:117] "RemoveContainer" containerID="43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2"
Mar 14 00:41:38.369521 containerd[1456]: time="2026-03-14T00:41:38.368944268Z" level=error msg="ContainerStatus for \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\": not found"
Mar 14 00:41:38.373790 kubelet[2604]: E0314 00:41:38.370554 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\": not found" containerID="43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2"
Mar 14 00:41:38.373790 kubelet[2604]: I0314 00:41:38.370629 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2"} err="failed to get container status \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\": rpc error: code = NotFound desc = an error occurred when try to find container \"43e3429672064369c0687481b784952c8f199ca4c698e62db00ff8babd14f9e2\": not found"
Mar 14 00:41:38.373790 kubelet[2604]: I0314 00:41:38.370693 2604 scope.go:117] "RemoveContainer" containerID="8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43"
Mar 14 00:41:38.373958 containerd[1456]: time="2026-03-14T00:41:38.372830298Z" level=info msg="RemoveContainer for \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\""
Mar 14 00:41:38.396060 containerd[1456]: time="2026-03-14T00:41:38.395509330Z" level=info msg="RemoveContainer for \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\" returns successfully"
Mar 14 00:41:38.396344 kubelet[2604]: I0314 00:41:38.396082 2604 scope.go:117] "RemoveContainer" containerID="8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43"
Mar 14 00:41:38.401952 containerd[1456]: time="2026-03-14T00:41:38.401339444Z" level=error msg="ContainerStatus for \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\": not found"
Mar 14 00:41:38.402256 kubelet[2604]: E0314 00:41:38.401632 2604 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\": not found" containerID="8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43"
Mar 14 00:41:38.402256 kubelet[2604]: I0314 00:41:38.401680 2604 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43"} err="failed to get container status \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\": rpc error: code = NotFound desc = an error occurred when try to find container \"8af1c83aa1615c2f4c0dda7b89dc67fab6ee0a037c0b4809fc3b4914e4d87d43\": not found"
Mar 14 00:41:38.826362 sshd[4499]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:38.844349 systemd[1]: sshd@31-10.0.0.131:22-10.0.0.1:48910.service: Deactivated successfully.
Mar 14 00:41:38.865679 systemd[1]: session-32.scope: Deactivated successfully.
Mar 14 00:41:38.872550 systemd-logind[1446]: Session 32 logged out. Waiting for processes to exit.
Mar 14 00:41:38.890880 systemd[1]: Started sshd@32-10.0.0.131:22-10.0.0.1:48924.service - OpenSSH per-connection server daemon (10.0.0.1:48924).
Mar 14 00:41:38.897465 systemd-logind[1446]: Removed session 32.
Mar 14 00:41:39.022526 sshd[4661]: Accepted publickey for core from 10.0.0.1 port 48924 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:39.026357 sshd[4661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:39.045358 kubelet[2604]: E0314 00:41:39.043877 2604 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 14 00:41:39.051941 systemd-logind[1446]: New session 33 of user core.
Mar 14 00:41:39.074542 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 14 00:41:39.392019 update_engine[1447]: I20260314 00:41:39.391146 1447 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:41:39.392019 update_engine[1447]: I20260314 00:41:39.391914 1447 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:41:39.392861 update_engine[1447]: I20260314 00:41:39.392524 1447 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:41:39.411944 update_engine[1447]: E20260314 00:41:39.411787 1447 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:41:39.411944 update_engine[1447]: I20260314 00:41:39.411900 1447 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Mar 14 00:41:39.419472 kubelet[2604]: I0314 00:41:39.419187 2604 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-14T00:41:39Z","lastTransitionTime":"2026-03-14T00:41:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 14 00:41:39.598838 kubelet[2604]: E0314 00:41:39.598757 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:39.609656 kubelet[2604]: I0314 00:41:39.608869 2604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15986774-2e6c-4bd6-ae16-0d84ff0809f4" path="/var/lib/kubelet/pods/15986774-2e6c-4bd6-ae16-0d84ff0809f4/volumes"
Mar 14 00:41:39.612494 kubelet[2604]: I0314 00:41:39.612329 2604 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21535549-b343-4f7c-8516-f113607c3178" path="/var/lib/kubelet/pods/21535549-b343-4f7c-8516-f113607c3178/volumes"
Mar 14 00:41:40.406065 sshd[4661]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:40.428497 systemd[1]: sshd@32-10.0.0.131:22-10.0.0.1:48924.service: Deactivated successfully.
Mar 14 00:41:40.433992 systemd[1]: session-33.scope: Deactivated successfully.
Mar 14 00:41:40.437890 systemd-logind[1446]: Session 33 logged out. Waiting for processes to exit.
Mar 14 00:41:40.476848 systemd[1]: Started sshd@33-10.0.0.131:22-10.0.0.1:54674.service - OpenSSH per-connection server daemon (10.0.0.1:54674).
Mar 14 00:41:40.479210 systemd-logind[1446]: Removed session 33.
Mar 14 00:41:40.543903 kubelet[2604]: I0314 00:41:40.543240 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f8af433a-94c9-4d3d-9378-8e98caca5ebd-cilium-cgroup\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.569967 kubelet[2604]: I0314 00:41:40.569282 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f8af433a-94c9-4d3d-9378-8e98caca5ebd-cilium-config-path\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.569967 kubelet[2604]: I0314 00:41:40.569385 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f8af433a-94c9-4d3d-9378-8e98caca5ebd-etc-cni-netd\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.569967 kubelet[2604]: I0314 00:41:40.569516 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f8af433a-94c9-4d3d-9378-8e98caca5ebd-xtables-lock\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.569967 kubelet[2604]: I0314 00:41:40.569543 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f8af433a-94c9-4d3d-9378-8e98caca5ebd-host-proc-sys-net\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.569967 kubelet[2604]: I0314 00:41:40.569570 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhxh9\" (UniqueName: \"kubernetes.io/projected/f8af433a-94c9-4d3d-9378-8e98caca5ebd-kube-api-access-dhxh9\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.569967 kubelet[2604]: I0314 00:41:40.569597 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f8af433a-94c9-4d3d-9378-8e98caca5ebd-bpf-maps\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.570447 kubelet[2604]: I0314 00:41:40.569618 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f8af433a-94c9-4d3d-9378-8e98caca5ebd-hostproc\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.570447 kubelet[2604]: I0314 00:41:40.569710 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f8af433a-94c9-4d3d-9378-8e98caca5ebd-hubble-tls\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.570447 kubelet[2604]: I0314 00:41:40.569731 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f8af433a-94c9-4d3d-9378-8e98caca5ebd-clustermesh-secrets\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.570447 kubelet[2604]: I0314 00:41:40.569751 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f8af433a-94c9-4d3d-9378-8e98caca5ebd-host-proc-sys-kernel\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.570447 kubelet[2604]: I0314 00:41:40.569775 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f8af433a-94c9-4d3d-9378-8e98caca5ebd-cilium-run\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.570447 kubelet[2604]: I0314 00:41:40.569797 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f8af433a-94c9-4d3d-9378-8e98caca5ebd-cni-path\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.572862 kubelet[2604]: I0314 00:41:40.569815 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f8af433a-94c9-4d3d-9378-8e98caca5ebd-lib-modules\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.572862 kubelet[2604]: I0314 00:41:40.569834 2604 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f8af433a-94c9-4d3d-9378-8e98caca5ebd-cilium-ipsec-secrets\") pod \"cilium-hck25\" (UID: \"f8af433a-94c9-4d3d-9378-8e98caca5ebd\") " pod="kube-system/cilium-hck25"
Mar 14 00:41:40.581696 systemd[1]: Created slice kubepods-burstable-podf8af433a_94c9_4d3d_9378_8e98caca5ebd.slice - libcontainer container kubepods-burstable-podf8af433a_94c9_4d3d_9378_8e98caca5ebd.slice.
Mar 14 00:41:40.588305 sshd[4674]: Accepted publickey for core from 10.0.0.1 port 54674 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:40.593862 sshd[4674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:40.636038 systemd-logind[1446]: New session 34 of user core.
Mar 14 00:41:40.647665 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 14 00:41:40.801697 sshd[4674]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:40.823671 systemd[1]: sshd@33-10.0.0.131:22-10.0.0.1:54674.service: Deactivated successfully.
Mar 14 00:41:40.829309 systemd[1]: session-34.scope: Deactivated successfully.
Mar 14 00:41:40.837300 systemd-logind[1446]: Session 34 logged out. Waiting for processes to exit.
Mar 14 00:41:40.866282 systemd[1]: Started sshd@34-10.0.0.131:22-10.0.0.1:54686.service - OpenSSH per-connection server daemon (10.0.0.1:54686).
Mar 14 00:41:40.873885 systemd-logind[1446]: Removed session 34.
Mar 14 00:41:40.935852 sshd[4687]: Accepted publickey for core from 10.0.0.1 port 54686 ssh2: RSA SHA256:f74FwPP26lrmXR7Yk+jC5n6SYLxHwL6pGZ9lm/UPur4
Mar 14 00:41:40.933222 sshd[4687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 14 00:41:40.945434 kubelet[2604]: E0314 00:41:40.944957 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:40.946231 containerd[1456]: time="2026-03-14T00:41:40.946061855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hck25,Uid:f8af433a-94c9-4d3d-9378-8e98caca5ebd,Namespace:kube-system,Attempt:0,}"
Mar 14 00:41:40.971378 systemd-logind[1446]: New session 35 of user core.
Mar 14 00:41:40.991861 systemd[1]: Started session-35.scope - Session 35 of User core.
Mar 14 00:41:41.087270 containerd[1456]: time="2026-03-14T00:41:41.083780200Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 14 00:41:41.087270 containerd[1456]: time="2026-03-14T00:41:41.084276791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 14 00:41:41.087270 containerd[1456]: time="2026-03-14T00:41:41.084305077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:41:41.087270 containerd[1456]: time="2026-03-14T00:41:41.084491960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 14 00:41:41.179631 systemd[1]: Started cri-containerd-6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9.scope - libcontainer container 6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9.
Mar 14 00:41:41.283418 containerd[1456]: time="2026-03-14T00:41:41.283208000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-hck25,Uid:f8af433a-94c9-4d3d-9378-8e98caca5ebd,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\""
Mar 14 00:41:41.285810 kubelet[2604]: E0314 00:41:41.284625 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:41.296370 containerd[1456]: time="2026-03-14T00:41:41.296061962Z" level=info msg="CreateContainer within sandbox \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 14 00:41:41.440331 containerd[1456]: time="2026-03-14T00:41:41.439395618Z" level=info msg="CreateContainer within sandbox \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"02ca7f29cd8c0ee580297b0516e695dedac0178bf8570355c9bee37eb222c0ea\""
Mar 14 00:41:41.453513 containerd[1456]: time="2026-03-14T00:41:41.446407526Z" level=info msg="StartContainer for \"02ca7f29cd8c0ee580297b0516e695dedac0178bf8570355c9bee37eb222c0ea\""
Mar 14 00:41:41.538717 systemd[1]: Started cri-containerd-02ca7f29cd8c0ee580297b0516e695dedac0178bf8570355c9bee37eb222c0ea.scope - libcontainer container 02ca7f29cd8c0ee580297b0516e695dedac0178bf8570355c9bee37eb222c0ea.
Mar 14 00:41:41.614678 containerd[1456]: time="2026-03-14T00:41:41.614435541Z" level=info msg="StartContainer for \"02ca7f29cd8c0ee580297b0516e695dedac0178bf8570355c9bee37eb222c0ea\" returns successfully"
Mar 14 00:41:41.679299 systemd[1]: cri-containerd-02ca7f29cd8c0ee580297b0516e695dedac0178bf8570355c9bee37eb222c0ea.scope: Deactivated successfully.
Mar 14 00:41:41.763812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02ca7f29cd8c0ee580297b0516e695dedac0178bf8570355c9bee37eb222c0ea-rootfs.mount: Deactivated successfully.
Mar 14 00:41:41.806684 containerd[1456]: time="2026-03-14T00:41:41.806005100Z" level=info msg="shim disconnected" id=02ca7f29cd8c0ee580297b0516e695dedac0178bf8570355c9bee37eb222c0ea namespace=k8s.io
Mar 14 00:41:41.806684 containerd[1456]: time="2026-03-14T00:41:41.806079800Z" level=warning msg="cleaning up after shim disconnected" id=02ca7f29cd8c0ee580297b0516e695dedac0178bf8570355c9bee37eb222c0ea namespace=k8s.io
Mar 14 00:41:41.806684 containerd[1456]: time="2026-03-14T00:41:41.806097915Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:41:41.865860 containerd[1456]: time="2026-03-14T00:41:41.861823674Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:41:41Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:41:42.213469 kubelet[2604]: E0314 00:41:42.210318 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:42.234363 containerd[1456]: time="2026-03-14T00:41:42.234312261Z" level=info msg="CreateContainer within sandbox \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 14 00:41:42.328235 containerd[1456]: time="2026-03-14T00:41:42.328042304Z" level=info msg="CreateContainer within sandbox \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7ad37d1b1df18581ce5279131c5babf6c03423b27b4873e92b5fa916abc3745c\""
Mar 14 00:41:42.332485 containerd[1456]: time="2026-03-14T00:41:42.332332718Z" level=info msg="StartContainer for \"7ad37d1b1df18581ce5279131c5babf6c03423b27b4873e92b5fa916abc3745c\""
Mar 14 00:41:42.456505 systemd[1]: Started cri-containerd-7ad37d1b1df18581ce5279131c5babf6c03423b27b4873e92b5fa916abc3745c.scope - libcontainer container 7ad37d1b1df18581ce5279131c5babf6c03423b27b4873e92b5fa916abc3745c.
Mar 14 00:41:42.598940 containerd[1456]: time="2026-03-14T00:41:42.596552283Z" level=info msg="StartContainer for \"7ad37d1b1df18581ce5279131c5babf6c03423b27b4873e92b5fa916abc3745c\" returns successfully"
Mar 14 00:41:42.624981 systemd[1]: cri-containerd-7ad37d1b1df18581ce5279131c5babf6c03423b27b4873e92b5fa916abc3745c.scope: Deactivated successfully.
Mar 14 00:41:42.732658 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7ad37d1b1df18581ce5279131c5babf6c03423b27b4873e92b5fa916abc3745c-rootfs.mount: Deactivated successfully.
Mar 14 00:41:42.743586 containerd[1456]: time="2026-03-14T00:41:42.742998892Z" level=info msg="shim disconnected" id=7ad37d1b1df18581ce5279131c5babf6c03423b27b4873e92b5fa916abc3745c namespace=k8s.io
Mar 14 00:41:42.743586 containerd[1456]: time="2026-03-14T00:41:42.743229391Z" level=warning msg="cleaning up after shim disconnected" id=7ad37d1b1df18581ce5279131c5babf6c03423b27b4873e92b5fa916abc3745c namespace=k8s.io
Mar 14 00:41:42.743586 containerd[1456]: time="2026-03-14T00:41:42.743246365Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:41:42.842909 containerd[1456]: time="2026-03-14T00:41:42.842703978Z" level=warning msg="cleanup warnings time=\"2026-03-14T00:41:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 14 00:41:43.222585 kubelet[2604]: E0314 00:41:43.222548 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:43.261683 containerd[1456]: time="2026-03-14T00:41:43.248555244Z" level=info msg="CreateContainer within sandbox \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 14 00:41:43.336868 containerd[1456]: time="2026-03-14T00:41:43.336621323Z" level=info msg="CreateContainer within sandbox \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"14886adecad821e2d18723dcfbe41e09dc4d2954371c42338bb9b5a8f1956dc9\""
Mar 14 00:41:43.341437 containerd[1456]: time="2026-03-14T00:41:43.338200148Z" level=info msg="StartContainer for \"14886adecad821e2d18723dcfbe41e09dc4d2954371c42338bb9b5a8f1956dc9\""
Mar 14 00:41:43.443494 systemd[1]: Started cri-containerd-14886adecad821e2d18723dcfbe41e09dc4d2954371c42338bb9b5a8f1956dc9.scope - libcontainer container 14886adecad821e2d18723dcfbe41e09dc4d2954371c42338bb9b5a8f1956dc9.
Mar 14 00:41:43.609169 containerd[1456]: time="2026-03-14T00:41:43.608444297Z" level=info msg="StopPodSandbox for \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\""
Mar 14 00:41:43.609169 containerd[1456]: time="2026-03-14T00:41:43.608567993Z" level=info msg="TearDown network for sandbox \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\" successfully"
Mar 14 00:41:43.609169 containerd[1456]: time="2026-03-14T00:41:43.608584255Z" level=info msg="StopPodSandbox for \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\" returns successfully"
Mar 14 00:41:43.612830 containerd[1456]: time="2026-03-14T00:41:43.609827842Z" level=info msg="RemovePodSandbox for \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\""
Mar 14 00:41:43.616756 containerd[1456]: time="2026-03-14T00:41:43.616606238Z" level=info msg="Forcibly stopping sandbox \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\""
Mar 14 00:41:43.621584 containerd[1456]: time="2026-03-14T00:41:43.616765695Z" level=info msg="TearDown network for sandbox \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\" successfully"
Mar 14 00:41:43.624653 containerd[1456]: time="2026-03-14T00:41:43.624551448Z" level=info msg="StartContainer for \"14886adecad821e2d18723dcfbe41e09dc4d2954371c42338bb9b5a8f1956dc9\" returns successfully"
Mar 14 00:41:43.631824 containerd[1456]: time="2026-03-14T00:41:43.631737702Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:41:43.632218 containerd[1456]: time="2026-03-14T00:41:43.631954173Z" level=info msg="RemovePodSandbox \"fb1b58c512aabc850e68335e4c08541deed9a68c6cc62b7eb2f70277fcfc8a77\" returns successfully"
Mar 14 00:41:43.633209 containerd[1456]: time="2026-03-14T00:41:43.632945878Z" level=info msg="StopPodSandbox for \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\""
Mar 14 00:41:43.633209 containerd[1456]: time="2026-03-14T00:41:43.633060125Z" level=info msg="TearDown network for sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" successfully"
Mar 14 00:41:43.633209 containerd[1456]: time="2026-03-14T00:41:43.633081218Z" level=info msg="StopPodSandbox for \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" returns successfully"
Mar 14 00:41:43.638577 containerd[1456]: time="2026-03-14T00:41:43.638334361Z" level=info msg="RemovePodSandbox for \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\""
Mar 14 00:41:43.638577 containerd[1456]: time="2026-03-14T00:41:43.638399240Z" level=info msg="Forcibly stopping sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\""
Mar 14 00:41:43.638577 containerd[1456]: time="2026-03-14T00:41:43.638540562Z" level=info msg="TearDown network for sandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" successfully"
Mar 14 00:41:43.641054 systemd[1]: cri-containerd-14886adecad821e2d18723dcfbe41e09dc4d2954371c42338bb9b5a8f1956dc9.scope: Deactivated successfully.
Mar 14 00:41:43.666843 containerd[1456]: time="2026-03-14T00:41:43.666639375Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 14 00:41:43.666843 containerd[1456]: time="2026-03-14T00:41:43.666832941Z" level=info msg="RemovePodSandbox \"a0fd65940b6d9a2217ae7d515783ab9c20150e746c826563b92529e98440dfe9\" returns successfully"
Mar 14 00:41:43.746860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14886adecad821e2d18723dcfbe41e09dc4d2954371c42338bb9b5a8f1956dc9-rootfs.mount: Deactivated successfully.
Mar 14 00:41:43.802782 containerd[1456]: time="2026-03-14T00:41:43.802000666Z" level=info msg="shim disconnected" id=14886adecad821e2d18723dcfbe41e09dc4d2954371c42338bb9b5a8f1956dc9 namespace=k8s.io
Mar 14 00:41:43.802782 containerd[1456]: time="2026-03-14T00:41:43.802064904Z" level=warning msg="cleaning up after shim disconnected" id=14886adecad821e2d18723dcfbe41e09dc4d2954371c42338bb9b5a8f1956dc9 namespace=k8s.io
Mar 14 00:41:43.802782 containerd[1456]: time="2026-03-14T00:41:43.802077310Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:41:44.046462 kubelet[2604]: E0314 00:41:44.046082 2604 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 14 00:41:44.239248 kubelet[2604]: E0314 00:41:44.238668 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:44.271429 containerd[1456]: time="2026-03-14T00:41:44.271223236Z" level=info msg="CreateContainer within sandbox \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 14 00:41:44.336352 containerd[1456]: time="2026-03-14T00:41:44.335642617Z" level=info msg="CreateContainer within sandbox \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7324f0bebe23b9c777ea23ea6266bd44dc3e215676204563bf469f78a26467bc\""
Mar 14 00:41:44.338084 containerd[1456]: time="2026-03-14T00:41:44.338043432Z" level=info msg="StartContainer for \"7324f0bebe23b9c777ea23ea6266bd44dc3e215676204563bf469f78a26467bc\""
Mar 14 00:41:44.437731 systemd[1]: Started cri-containerd-7324f0bebe23b9c777ea23ea6266bd44dc3e215676204563bf469f78a26467bc.scope - libcontainer container 7324f0bebe23b9c777ea23ea6266bd44dc3e215676204563bf469f78a26467bc.
Mar 14 00:41:44.501761 systemd[1]: cri-containerd-7324f0bebe23b9c777ea23ea6266bd44dc3e215676204563bf469f78a26467bc.scope: Deactivated successfully.
Mar 14 00:41:44.505295 containerd[1456]: time="2026-03-14T00:41:44.504508553Z" level=info msg="StartContainer for \"7324f0bebe23b9c777ea23ea6266bd44dc3e215676204563bf469f78a26467bc\" returns successfully"
Mar 14 00:41:44.592424 containerd[1456]: time="2026-03-14T00:41:44.589803411Z" level=info msg="shim disconnected" id=7324f0bebe23b9c777ea23ea6266bd44dc3e215676204563bf469f78a26467bc namespace=k8s.io
Mar 14 00:41:44.592424 containerd[1456]: time="2026-03-14T00:41:44.589873039Z" level=warning msg="cleaning up after shim disconnected" id=7324f0bebe23b9c777ea23ea6266bd44dc3e215676204563bf469f78a26467bc namespace=k8s.io
Mar 14 00:41:44.592424 containerd[1456]: time="2026-03-14T00:41:44.589887448Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 14 00:41:44.707703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7324f0bebe23b9c777ea23ea6266bd44dc3e215676204563bf469f78a26467bc-rootfs.mount: Deactivated successfully.
Mar 14 00:41:45.263707 kubelet[2604]: E0314 00:41:45.262214 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:45.297198 containerd[1456]: time="2026-03-14T00:41:45.296924380Z" level=info msg="CreateContainer within sandbox \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 14 00:41:45.357810 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730634935.mount: Deactivated successfully.
Mar 14 00:41:45.416821 containerd[1456]: time="2026-03-14T00:41:45.416635951Z" level=info msg="CreateContainer within sandbox \"6d3816d6a3b2f4fdba6c2ed50ba5ece8c5150c4d5f9a6d4a27d76baa41fae2f9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"943f02cf06f6a1de2db830e75649e6e0e4120b36f7bfa0f74b7a3e3ac08dd048\""
Mar 14 00:41:45.420261 containerd[1456]: time="2026-03-14T00:41:45.419297943Z" level=info msg="StartContainer for \"943f02cf06f6a1de2db830e75649e6e0e4120b36f7bfa0f74b7a3e3ac08dd048\""
Mar 14 00:41:45.553713 systemd[1]: Started cri-containerd-943f02cf06f6a1de2db830e75649e6e0e4120b36f7bfa0f74b7a3e3ac08dd048.scope - libcontainer container 943f02cf06f6a1de2db830e75649e6e0e4120b36f7bfa0f74b7a3e3ac08dd048.
Mar 14 00:41:45.787304 containerd[1456]: time="2026-03-14T00:41:45.783320407Z" level=info msg="StartContainer for \"943f02cf06f6a1de2db830e75649e6e0e4120b36f7bfa0f74b7a3e3ac08dd048\" returns successfully"
Mar 14 00:41:45.928746 systemd[1]: run-containerd-runc-k8s.io-943f02cf06f6a1de2db830e75649e6e0e4120b36f7bfa0f74b7a3e3ac08dd048-runc.N2ZDiW.mount: Deactivated successfully.
Mar 14 00:41:47.310635 kubelet[2604]: E0314 00:41:47.308667 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:47.427908 kubelet[2604]: I0314 00:41:47.424506 2604 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-hck25" podStartSLOduration=7.4244855229999995 podStartE2EDuration="7.424485523s" podCreationTimestamp="2026-03-14 00:41:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-14 00:41:47.41891563 +0000 UTC m=+244.617149289" watchObservedRunningTime="2026-03-14 00:41:47.424485523 +0000 UTC m=+244.622719162"
Mar 14 00:41:47.870236 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 14 00:41:48.924233 kubelet[2604]: E0314 00:41:48.922798 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:49.393050 update_engine[1447]: I20260314 00:41:49.392835 1447 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:41:49.393943 update_engine[1447]: I20260314 00:41:49.393469 1447 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:41:49.394545 update_engine[1447]: I20260314 00:41:49.394354 1447 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:41:49.413237 update_engine[1447]: E20260314 00:41:49.413002 1447 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:41:49.413237 update_engine[1447]: I20260314 00:41:49.413218 1447 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Mar 14 00:41:54.331002 systemd-networkd[1385]: lxc_health: Link UP
Mar 14 00:41:54.388471 systemd-networkd[1385]: lxc_health: Gained carrier
Mar 14 00:41:54.929203 kubelet[2604]: E0314 00:41:54.929003 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:55.380807 kubelet[2604]: E0314 00:41:55.377816 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:55.702737 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Mar 14 00:41:56.381920 kubelet[2604]: E0314 00:41:56.381692 2604 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 14 00:41:59.393853 update_engine[1447]: I20260314 00:41:59.393639 1447 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:41:59.394611 update_engine[1447]: I20260314 00:41:59.394371 1447 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:41:59.394948 update_engine[1447]: I20260314 00:41:59.394783 1447 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:41:59.417815 update_engine[1447]: E20260314 00:41:59.417644 1447 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:41:59.420075 update_engine[1447]: I20260314 00:41:59.419939 1447 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 14 00:41:59.420075 update_engine[1447]: I20260314 00:41:59.420014 1447 omaha_request_action.cc:617] Omaha request response:
Mar 14 00:41:59.420421 update_engine[1447]: E20260314 00:41:59.420257 1447 omaha_request_action.cc:636] Omaha request network transfer failed.
Mar 14 00:41:59.420421 update_engine[1447]: I20260314 00:41:59.420298 1447 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Mar 14 00:41:59.420421 update_engine[1447]: I20260314 00:41:59.420313 1447 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 14 00:41:59.420421 update_engine[1447]: I20260314 00:41:59.420327 1447 update_attempter.cc:306] Processing Done.
Mar 14 00:41:59.420421 update_engine[1447]: E20260314 00:41:59.420353 1447 update_attempter.cc:619] Update failed.
Mar 14 00:41:59.420421 update_engine[1447]: I20260314 00:41:59.420367 1447 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Mar 14 00:41:59.420421 update_engine[1447]: I20260314 00:41:59.420380 1447 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Mar 14 00:41:59.420421 update_engine[1447]: I20260314 00:41:59.420394 1447 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Mar 14 00:41:59.421082 update_engine[1447]: I20260314 00:41:59.420527 1447 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Mar 14 00:41:59.421082 update_engine[1447]: I20260314 00:41:59.420565 1447 omaha_request_action.cc:271] Posting an Omaha request to disabled
Mar 14 00:41:59.421082 update_engine[1447]: I20260314 00:41:59.420576 1447 omaha_request_action.cc:272] Request:
Mar 14 00:41:59.421082 update_engine[1447]:
Mar 14 00:41:59.421082 update_engine[1447]:
Mar 14 00:41:59.421082 update_engine[1447]:
Mar 14 00:41:59.421082 update_engine[1447]:
Mar 14 00:41:59.421082 update_engine[1447]:
Mar 14 00:41:59.421082 update_engine[1447]:
Mar 14 00:41:59.421082 update_engine[1447]: I20260314 00:41:59.420588 1447 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Mar 14 00:41:59.421082 update_engine[1447]: I20260314 00:41:59.421015 1447 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Mar 14 00:41:59.422097 update_engine[1447]: I20260314 00:41:59.421403 1447 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Mar 14 00:41:59.422523 locksmithd[1484]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Mar 14 00:41:59.441820 update_engine[1447]: E20260314 00:41:59.441586 1447 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Mar 14 00:41:59.441943 update_engine[1447]: I20260314 00:41:59.441809 1447 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Mar 14 00:41:59.441943 update_engine[1447]: I20260314 00:41:59.441840 1447 omaha_request_action.cc:617] Omaha request response:
Mar 14 00:41:59.441943 update_engine[1447]: I20260314 00:41:59.441859 1447 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 14 00:41:59.441943 update_engine[1447]: I20260314 00:41:59.441874 1447 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Mar 14 00:41:59.441943 update_engine[1447]: I20260314 00:41:59.441884 1447 update_attempter.cc:306] Processing Done.
Mar 14 00:41:59.441943 update_engine[1447]: I20260314 00:41:59.441900 1447 update_attempter.cc:310] Error event sent.
Mar 14 00:41:59.442298 update_engine[1447]: I20260314 00:41:59.441921 1447 update_check_scheduler.cc:74] Next update check in 46m5s
Mar 14 00:41:59.443582 locksmithd[1484]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Mar 14 00:41:59.762830 sshd[4687]: pam_unix(sshd:session): session closed for user core
Mar 14 00:41:59.776432 systemd[1]: sshd@34-10.0.0.131:22-10.0.0.1:54686.service: Deactivated successfully.
Mar 14 00:41:59.789379 systemd[1]: session-35.scope: Deactivated successfully.
Mar 14 00:41:59.794891 systemd-logind[1446]: Session 35 logged out. Waiting for processes to exit.
Mar 14 00:41:59.800640 systemd-logind[1446]: Removed session 35.