Mar 2 13:06:23.496981 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026
Mar 2 13:06:23.497016 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:06:23.497033 kernel: BIOS-provided physical RAM map:
Mar 2 13:06:23.497042 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 2 13:06:23.497051 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 2 13:06:23.497059 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 2 13:06:23.497070 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 2 13:06:23.497079 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 2 13:06:23.497088 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 2 13:06:23.497102 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 2 13:06:23.497111 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 13:06:23.497120 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 2 13:06:23.497182 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 13:06:23.497193 kernel: NX (Execute Disable) protection: active
Mar 2 13:06:23.497203 kernel: APIC: Static calls initialized
Mar 2 13:06:23.497349 kernel: SMBIOS 2.8 present.
Mar 2 13:06:23.497363 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 2 13:06:23.497373 kernel: Hypervisor detected: KVM
Mar 2 13:06:23.497385 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 13:06:23.497394 kernel: kvm-clock: using sched offset of 11918132892 cycles
Mar 2 13:06:23.497405 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 13:06:23.497416 kernel: tsc: Detected 2445.426 MHz processor
Mar 2 13:06:23.497427 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 13:06:23.497437 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 13:06:23.497454 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 2 13:06:23.497465 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 2 13:06:23.497477 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 13:06:23.497902 kernel: Using GB pages for direct mapping
Mar 2 13:06:23.497920 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:06:23.497930 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 2 13:06:23.497941 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:06:23.497951 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:06:23.497961 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:06:23.497978 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 2 13:06:23.497989 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:06:23.498000 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:06:23.498009 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:06:23.498020 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001)
Mar 2 13:06:23.498031 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 2 13:06:23.498041 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 2 13:06:23.498056 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 2 13:06:23.498069 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 2 13:06:23.498079 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 2 13:06:23.498088 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 2 13:06:23.498098 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 2 13:06:23.498107 kernel: No NUMA configuration found
Mar 2 13:06:23.498117 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 2 13:06:23.498128 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 2 13:06:23.498144 kernel: Zone ranges:
Mar 2 13:06:23.498155 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 13:06:23.498166 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 2 13:06:23.498177 kernel: Normal empty
Mar 2 13:06:23.498189 kernel: Movable zone start for each node
Mar 2 13:06:23.498200 kernel: Early memory node ranges
Mar 2 13:06:23.498211 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 2 13:06:23.498222 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 2 13:06:23.498234 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 2 13:06:23.498347 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:06:23.498394 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 2 13:06:23.498404 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 2 13:06:23.498414 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 13:06:23.498426 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 13:06:23.498437 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 13:06:23.498448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 13:06:23.498459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 13:06:23.498471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 13:06:23.498546 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 13:06:23.498560 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 13:06:23.498571 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 13:06:23.498582 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 13:06:23.498594 kernel: TSC deadline timer available
Mar 2 13:06:23.498604 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 2 13:06:23.498615 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 13:06:23.498627 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 13:06:23.498679 kernel: kvm-guest: setup PV sched yield
Mar 2 13:06:23.498695 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 2 13:06:23.498705 kernel: Booting paravirtualized kernel on KVM
Mar 2 13:06:23.498714 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 13:06:23.498724 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 13:06:23.498736 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 2 13:06:23.498747 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 2 13:06:23.498758 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 13:06:23.498768 kernel: kvm-guest: PV spinlocks enabled
Mar 2 13:06:23.498780 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 13:06:23.498796 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:06:23.498808 kernel: random: crng init done
Mar 2 13:06:23.498817 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:06:23.498827 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:06:23.498837 kernel: Fallback order for Node 0: 0
Mar 2 13:06:23.498846 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 2 13:06:23.498855 kernel: Policy zone: DMA32
Mar 2 13:06:23.498865 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:06:23.498879 kernel: Memory: 2434608K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136884K reserved, 0K cma-reserved)
Mar 2 13:06:23.498890 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 13:06:23.498901 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 2 13:06:23.498912 kernel: ftrace: allocated 149 pages with 4 groups
Mar 2 13:06:23.498923 kernel: Dynamic Preempt: voluntary
Mar 2 13:06:23.498934 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:06:23.498947 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:06:23.498959 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 13:06:23.498970 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:06:23.498984 kernel: Rude variant of Tasks RCU enabled.
Mar 2 13:06:23.498993 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:06:23.499003 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:06:23.499012 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 13:06:23.499065 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 13:06:23.499077 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:06:23.499089 kernel: Console: colour VGA+ 80x25
Mar 2 13:06:23.499099 kernel: printk: console [ttyS0] enabled
Mar 2 13:06:23.499108 kernel: ACPI: Core revision 20230628
Mar 2 13:06:23.499118 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 13:06:23.499132 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 13:06:23.499142 kernel: x2apic enabled
Mar 2 13:06:23.499151 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 13:06:23.499161 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 13:06:23.499170 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 13:06:23.499180 kernel: kvm-guest: setup PV IPIs
Mar 2 13:06:23.499190 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 13:06:23.499217 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 2 13:06:23.499229 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 2 13:06:23.499329 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 13:06:23.499341 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 13:06:23.499357 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 13:06:23.499370 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 13:06:23.499383 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 13:06:23.499395 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 13:06:23.499407 kernel: Speculative Store Bypass: Vulnerable
Mar 2 13:06:23.499422 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 13:06:23.500537 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 13:06:23.500555 kernel: active return thunk: srso_alias_return_thunk
Mar 2 13:06:23.500566 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 13:06:23.500576 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 13:06:23.500587 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 13:06:23.500598 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 13:06:23.500609 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 13:06:23.500626 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 13:06:23.500638 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 13:06:23.500650 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 13:06:23.500660 kernel: Freeing SMP alternatives memory: 32K
Mar 2 13:06:23.500673 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:06:23.500684 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 13:06:23.500698 kernel: landlock: Up and running.
Mar 2 13:06:23.500709 kernel: SELinux: Initializing.
Mar 2 13:06:23.500721 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:06:23.500738 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:06:23.500751 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 13:06:23.500762 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:06:23.500773 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:06:23.500785 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:06:23.500796 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 13:06:23.500808 kernel: signal: max sigframe size: 1776
Mar 2 13:06:23.500867 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:06:23.500879 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:06:23.500896 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 13:06:23.500906 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:06:23.500916 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 13:06:23.500926 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 13:06:23.500936 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 13:06:23.500946 kernel: smpboot: Max logical packages: 1
Mar 2 13:06:23.500957 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 2 13:06:23.500967 kernel: devtmpfs: initialized
Mar 2 13:06:23.500977 kernel: x86/mm: Memory block size: 128MB
Mar 2 13:06:23.500991 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:06:23.501002 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 13:06:23.501012 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:06:23.501022 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:06:23.501032 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:06:23.501042 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:06:23.501053 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 13:06:23.501063 kernel: audit: type=2000 audit(1772456776.189:1): state=initialized audit_enabled=0 res=1
Mar 2 13:06:23.501073 kernel: cpuidle: using governor menu
Mar 2 13:06:23.501087 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:06:23.501097 kernel: dca service started, version 1.12.1
Mar 2 13:06:23.501107 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 2 13:06:23.501119 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 2 13:06:23.501130 kernel: PCI: Using configuration type 1 for base access
Mar 2 13:06:23.501144 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 13:06:23.501157 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:06:23.501169 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:06:23.501181 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:06:23.501198 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:06:23.501210 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:06:23.501220 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:06:23.501230 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:06:23.501432 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:06:23.501445 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 2 13:06:23.501455 kernel: ACPI: Interpreter enabled
Mar 2 13:06:23.501465 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 13:06:23.501476 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 13:06:23.501546 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 13:06:23.501558 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 13:06:23.501568 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 13:06:23.501580 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 13:06:23.503807 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 13:06:23.504023 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 13:06:23.504203 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 13:06:23.504224 kernel: PCI host bridge to bus 0000:00
Mar 2 13:06:23.504779 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 13:06:23.504948 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 13:06:23.505098 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 13:06:23.505428 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 2 13:06:23.505653 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 2 13:06:23.505811 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 2 13:06:23.505973 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 13:06:23.506570 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 2 13:06:23.513059 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 2 13:06:23.513550 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 2 13:06:23.513750 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 2 13:06:23.513937 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 2 13:06:23.514111 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 13:06:23.514794 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 2 13:06:23.514970 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 2 13:06:23.515154 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 2 13:06:23.515611 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 2 13:06:23.515939 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 2 13:06:23.516109 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 2 13:06:23.516444 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 2 13:06:23.516714 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 2 13:06:23.517029 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 2 13:06:23.517205 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 2 13:06:23.517616 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 2 13:06:23.517798 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 2 13:06:23.517963 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 2 13:06:23.518397 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 2 13:06:23.518669 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 13:06:23.518874 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 11718 usecs
Mar 2 13:06:23.520970 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 2 13:06:23.521150 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 2 13:06:23.521443 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 2 13:06:23.521834 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 2 13:06:23.522020 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 2 13:06:23.522037 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 13:06:23.522047 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 13:06:23.522057 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 13:06:23.522067 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 13:06:23.522077 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 13:06:23.522088 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 13:06:23.522097 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 13:06:23.522109 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 13:06:23.522127 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 13:06:23.522138 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 13:06:23.522148 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 13:06:23.522159 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 13:06:23.522169 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 13:06:23.522179 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 13:06:23.522189 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 13:06:23.522200 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 13:06:23.522210 kernel: iommu: Default domain type: Translated
Mar 2 13:06:23.522224 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 13:06:23.522234 kernel: PCI: Using ACPI for IRQ routing
Mar 2 13:06:23.522336 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 13:06:23.522347 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 2 13:06:23.522358 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 2 13:06:23.522612 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 13:06:23.522792 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 13:06:23.522956 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 13:06:23.522976 kernel: vgaarb: loaded
Mar 2 13:06:23.522988 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 13:06:23.523000 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 13:06:23.523010 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 13:06:23.523020 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:06:23.523031 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:06:23.523042 kernel: pnp: PnP ACPI init
Mar 2 13:06:23.523791 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 2 13:06:23.523821 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 13:06:23.523834 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 13:06:23.523847 kernel: NET: Registered PF_INET protocol family
Mar 2 13:06:23.523857 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:06:23.523870 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:06:23.523882 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:06:23.523894 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:06:23.523906 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:06:23.523916 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:06:23.523934 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:06:23.523946 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:06:23.523959 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:06:23.523969 kernel: NET: Registered PF_XDP protocol family
Mar 2 13:06:23.524170 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 13:06:23.527806 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 13:06:23.527981 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 13:06:23.528166 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 2 13:06:23.528469 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 2 13:06:23.528710 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 2 13:06:23.528725 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:06:23.528736 kernel: Initialise system trusted keyrings
Mar 2 13:06:23.528747 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:06:23.528758 kernel: Key type asymmetric registered
Mar 2 13:06:23.528768 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:06:23.528778 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 2 13:06:23.528788 kernel: io scheduler mq-deadline registered
Mar 2 13:06:23.528798 kernel: io scheduler kyber registered
Mar 2 13:06:23.528814 kernel: io scheduler bfq registered
Mar 2 13:06:23.528825 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 13:06:23.528837 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 13:06:23.528848 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 13:06:23.528859 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 13:06:23.528869 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:06:23.528878 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 13:06:23.528889 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 13:06:23.528899 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 13:06:23.528913 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 13:06:23.529190 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 13:06:23.529447 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 13:06:23.529465 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 2 13:06:23.529699 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T13:06:21 UTC (1772456781)
Mar 2 13:06:23.530318 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 2 13:06:23.530335 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 13:06:23.530352 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:06:23.530362 kernel: Segment Routing with IPv6
Mar 2 13:06:23.530373 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:06:23.530383 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:06:23.530393 kernel: Key type dns_resolver registered
Mar 2 13:06:23.530404 kernel: IPI shorthand broadcast: enabled
Mar 2 13:06:23.530415 kernel: sched_clock: Marking stable (5377043249, 622647813)->(6561544723, -561853661)
Mar 2 13:06:23.530425 kernel: registered taskstats version 1
Mar 2 13:06:23.530436 kernel: Loading compiled-in X.509 certificates
Mar 2 13:06:23.530447 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45'
Mar 2 13:06:23.530461 kernel: Key type .fscrypt registered
Mar 2 13:06:23.530471 kernel: Key type fscrypt-provisioning registered
Mar 2 13:06:23.530481 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 13:06:23.530549 kernel: ima: Allocated hash algorithm: sha1
Mar 2 13:06:23.530561 kernel: ima: No architecture policies found
Mar 2 13:06:23.530571 kernel: clk: Disabling unused clocks
Mar 2 13:06:23.530582 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 2 13:06:23.530593 kernel: Write protecting the kernel read-only data: 36864k
Mar 2 13:06:23.530607 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 2 13:06:23.530618 kernel: Run /init as init process
Mar 2 13:06:23.530629 kernel: with arguments:
Mar 2 13:06:23.530639 kernel: /init
Mar 2 13:06:23.530649 kernel: with environment:
Mar 2 13:06:23.530660 kernel: HOME=/
Mar 2 13:06:23.530670 kernel: TERM=linux
Mar 2 13:06:23.530683 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:06:23.530700 systemd[1]: Detected virtualization kvm.
Mar 2 13:06:23.530711 systemd[1]: Detected architecture x86-64.
Mar 2 13:06:23.530721 systemd[1]: Running in initrd.
Mar 2 13:06:23.530732 systemd[1]: No hostname configured, using default hostname.
Mar 2 13:06:23.530743 systemd[1]: Hostname set to .
Mar 2 13:06:23.530754 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:06:23.530765 systemd[1]: Queued start job for default target initrd.target.
Mar 2 13:06:23.530776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:06:23.530791 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:06:23.530803 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 13:06:23.530814 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:06:23.530826 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:06:23.530837 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:06:23.530850 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:06:23.530861 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:06:23.530876 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:06:23.530887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:06:23.530898 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:06:23.530909 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:06:23.530937 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:06:23.530952 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:06:23.530968 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:06:23.530982 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:06:23.530992 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:06:23.531004 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:06:23.531016 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:06:23.531027 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:06:23.531038 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:06:23.531051 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:06:23.531062 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:06:23.531078 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:06:23.531089 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:06:23.531101 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:06:23.531112 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:06:23.531124 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:06:23.531136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:06:23.531180 systemd-journald[195]: Collecting audit messages is disabled.
Mar 2 13:06:23.531220 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:06:23.531233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:06:23.531340 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:06:23.531361 systemd-journald[195]: Journal started
Mar 2 13:06:23.531386 systemd-journald[195]: Runtime Journal (/run/log/journal/920b1f5464994dff80d5d3049d639834) is 6.0M, max 48.4M, 42.3M free.
Mar 2 13:06:23.562795 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:06:23.595949 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:06:23.612765 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:06:23.621849 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:06:23.629918 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:06:23.736973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:06:23.744025 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:06:23.793452 systemd-modules-load[196]: Inserted module 'overlay'
Mar 2 13:06:24.204411 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 13:06:24.204474 kernel: Bridge firewalling registered
Mar 2 13:06:23.870967 systemd-modules-load[196]: Inserted module 'br_netfilter'
Mar 2 13:06:23.874877 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:06:24.255201 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:06:24.339949 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:06:24.342335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:06:24.464101 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:06:24.550669 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 13:06:24.571587 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:06:24.599707 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:06:24.660054 dracut-cmdline[229]: dracut-dracut-053
Mar 2 13:06:24.670787 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:06:24.675080 systemd-resolved[232]: Positive Trust Anchors:
Mar 2 13:06:24.675101 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:06:24.675147 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:06:24.697357 systemd-resolved[232]: Defaulting to hostname 'linux'.
Mar 2 13:06:24.701031 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:06:24.726229 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:06:25.047489 kernel: SCSI subsystem initialized
Mar 2 13:06:25.084755 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 13:06:25.136897 kernel: iscsi: registered transport (tcp)
Mar 2 13:06:25.323893 kernel: iscsi: registered transport (qla4xxx)
Mar 2 13:06:25.324124 kernel: QLogic iSCSI HBA Driver
Mar 2 13:06:25.530896 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:06:25.561060 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 13:06:25.704036 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 13:06:25.704628 kernel: device-mapper: uevent: version 1.0.3
Mar 2 13:06:25.710057 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 13:06:26.027698 kernel: raid6: avx2x4 gen() 16182 MB/s
Mar 2 13:06:26.051758 kernel: raid6: avx2x2 gen() 16705 MB/s
Mar 2 13:06:26.071513 kernel: raid6: avx2x1 gen() 10536 MB/s
Mar 2 13:06:26.071662 kernel: raid6: using algorithm avx2x2 gen() 16705 MB/s
Mar 2 13:06:26.096825 kernel: raid6: .... xor() 17244 MB/s, rmw enabled
Mar 2 13:06:26.101447 kernel: raid6: using avx2x2 recovery algorithm
Mar 2 13:06:26.156806 kernel: xor: automatically using best checksumming function avx
Mar 2 13:06:26.819832 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 13:06:26.865769 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:06:26.892131 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:06:26.923624 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 2 13:06:26.935961 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:06:26.968197 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 13:06:27.031708 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Mar 2 13:06:27.152161 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:06:27.184136 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:06:27.357428 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:06:27.392803 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 13:06:27.444414 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:06:27.566508 kernel: cryptd: max_cpu_qlen set to 1000
Mar 2 13:06:27.447874 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:06:27.451038 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:06:27.451447 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:06:27.557764 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 13:06:27.663285 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 2 13:06:27.707097 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 2 13:06:27.719816 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:06:27.779825 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 2 13:06:27.779874 kernel: GPT:9289727 != 19775487
Mar 2 13:06:27.780013 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 2 13:06:27.780035 kernel: GPT:9289727 != 19775487
Mar 2 13:06:27.780049 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 2 13:06:27.780064 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:06:27.721770 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:06:27.798676 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:06:27.798855 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:06:27.799363 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:06:27.822042 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:06:28.214537 kernel: libata version 3.00 loaded.
Mar 2 13:06:28.221341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:06:28.248418 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:06:28.286904 kernel: ahci 0000:00:1f.2: version 3.0
Mar 2 13:06:28.302994 kernel: AVX2 version of gcm_enc/dec engaged.
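The `GPT:9289727 != 19775487` warnings above compare where the backup GPT header was found against where it belongs, which is the disk's last LBA. The numbers in the log are enough to reconstruct the situation, a small image written to a larger virtual disk:

```python
# Values taken from the log: vda reports 19775488 512-byte sectors,
# but the backup GPT header was found at LBA 9289727.
SECTOR_SIZE = 512
SECTOR_COUNT = 19775488
FOUND_BACKUP_LBA = 9289727

# The backup GPT header belongs on the very last LBA of the disk.
expected_backup_lba = SECTOR_COUNT - 1
print(expected_backup_lba)  # 19775487, matching "GPT:9289727 != 19775487"

# The found location implies the image was built for a smaller disk.
original_size_gib = (FOUND_BACKUP_LBA + 1) * SECTOR_SIZE / 2**30
print(round(original_size_gib, 2))  # ~4.43 GiB image on a 9.43 GiB disk
```

Tools such as `sgdisk -e` (or GNU Parted, as the kernel message suggests) relocate the backup GPT structures to the true end of the disk; on Flatcar first boot this is handled automatically, which is what the later `disk-uuid` messages reflect.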
Mar 2 13:06:28.303121 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 2 13:06:28.335402 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 2 13:06:28.339882 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 2 13:06:28.613562 kernel: hrtimer: interrupt took 7007409 ns
Mar 2 13:06:28.635956 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (468)
Mar 2 13:06:28.656889 kernel: AES CTR mode by8 optimization enabled
Mar 2 13:06:28.661719 kernel: scsi host0: ahci
Mar 2 13:06:28.666438 kernel: scsi host1: ahci
Mar 2 13:06:28.673214 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 2 13:06:29.354437 kernel: scsi host2: ahci
Mar 2 13:06:29.355494 kernel: scsi host3: ahci
Mar 2 13:06:29.355860 kernel: scsi host4: ahci
Mar 2 13:06:29.370753 kernel: scsi host5: ahci
Mar 2 13:06:29.371063 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31
Mar 2 13:06:29.371083 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31
Mar 2 13:06:29.371099 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31
Mar 2 13:06:29.371114 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31
Mar 2 13:06:29.371128 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31
Mar 2 13:06:29.371143 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31
Mar 2 13:06:29.371173 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (465)
Mar 2 13:06:29.371188 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 2 13:06:29.371201 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 2 13:06:29.372216 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 2 13:06:29.372398 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 2 13:06:29.372414 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 2 13:06:29.372428 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 2 13:06:29.372443 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 2 13:06:29.372458 kernel: ata3.00: applying bridge limits
Mar 2 13:06:29.372481 kernel: ata3.00: configured for UDMA/100
Mar 2 13:06:29.372494 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 2 13:06:29.373080 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 2 13:06:29.374228 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 13:06:29.374510 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 2 13:06:29.355785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:06:29.464820 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 2 13:06:29.512562 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 2 13:06:29.527549 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 2 13:06:29.567731 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:06:29.634535 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 13:06:29.663929 disk-uuid[568]: Primary Header is updated.
Mar 2 13:06:29.663929 disk-uuid[568]: Secondary Entries is updated.
Mar 2 13:06:29.663929 disk-uuid[568]: Secondary Header is updated.
Mar 2 13:06:29.708901 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:06:29.666707 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:06:29.802653 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:06:30.752843 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:06:30.764842 disk-uuid[569]: The operation has completed successfully.
Mar 2 13:06:30.945396 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 13:06:30.946052 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 13:06:31.026085 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 13:06:31.060850 sh[595]: Success
Mar 2 13:06:31.149522 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 2 13:06:31.301550 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 13:06:31.326475 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 13:06:31.354772 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 13:06:31.422191 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3
Mar 2 13:06:31.422457 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:06:31.422479 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 13:06:31.429831 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 13:06:31.435506 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 13:06:31.480461 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 13:06:31.503720 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 13:06:31.531774 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 13:06:31.549892 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 13:06:31.618444 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:06:31.618578 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:06:31.618597 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:06:31.649375 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:06:31.678038 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 13:06:31.700364 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:06:31.726176 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 13:06:31.747114 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 13:06:32.816539 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:06:33.024842 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:06:33.050623 ignition[697]: Ignition 2.19.0
Mar 2 13:06:33.051058 ignition[697]: Stage: fetch-offline
Mar 2 13:06:33.053497 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:06:33.053518 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:06:33.054885 ignition[697]: parsed url from cmdline: ""
Mar 2 13:06:33.054894 ignition[697]: no config URL provided
Mar 2 13:06:33.054904 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:06:33.054923 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:06:33.055061 ignition[697]: op(1): [started] loading QEMU firmware config module
Mar 2 13:06:33.123597 systemd-networkd[782]: lo: Link UP
Mar 2 13:06:33.055072 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 2 13:06:33.123604 systemd-networkd[782]: lo: Gained carrier
Mar 2 13:06:33.109456 ignition[697]: op(1): [finished] loading QEMU firmware config module
Mar 2 13:06:33.126515 systemd-networkd[782]: Enumeration completed
Mar 2 13:06:33.127059 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:06:33.128539 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:06:33.128545 systemd-networkd[782]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:06:33.135844 systemd-networkd[782]: eth0: Link UP
Mar 2 13:06:33.135853 systemd-networkd[782]: eth0: Gained carrier
Mar 2 13:06:33.135871 systemd-networkd[782]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:06:33.151710 systemd[1]: Reached target network.target - Network.
Mar 2 13:06:33.209824 systemd-networkd[782]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:06:34.158090 ignition[697]: parsing config with SHA512: 961a3e82e593c97e594cfe414fa714f35db218c4e0c7e9f402e1720215d894f7e6049481d4ba04c53c0de1fd8aed367c4bcd8bf00d402b634992cadd1d7f2719
Mar 2 13:06:34.340404 unknown[697]: fetched base config from "system"
Mar 2 13:06:34.341031 unknown[697]: fetched user config from "qemu"
Mar 2 13:06:34.342457 ignition[697]: fetch-offline: fetch-offline passed
Mar 2 13:06:34.360949 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:06:34.345979 ignition[697]: Ignition finished successfully
Mar 2 13:06:34.399838 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 2 13:06:34.412183 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
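In the fetch-offline stage above, Ignition pulls the user config from QEMU's fw_cfg device and logs the SHA-512 of the rendered config before parsing it. A sketch of reproducing that kind of digest; the config bytes here are a hypothetical stand-in, since the actual config behind the logged digest is not shown:

```python
import hashlib
import json

# Hypothetical minimal Ignition config; the real bytes behind the
# "parsing config with SHA512: 961a3e82..." line are not in the log.
config = json.dumps({"ignition": {"version": "3.3.0"}}).encode()

digest = hashlib.sha512(config).hexdigest()
print(len(digest))  # 128 hex characters, the same shape as the logged digest
```

Comparing such a digest against the log line is a quick way to confirm which rendered config a given boot actually consumed.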
Mar 2 13:06:34.502090 systemd-networkd[782]: eth0: Gained IPv6LL
Mar 2 13:06:34.634042 ignition[787]: Ignition 2.19.0
Mar 2 13:06:34.634553 ignition[787]: Stage: kargs
Mar 2 13:06:34.636351 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:06:34.636376 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:06:34.640128 ignition[787]: kargs: kargs passed
Mar 2 13:06:34.640206 ignition[787]: Ignition finished successfully
Mar 2 13:06:34.701634 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 13:06:34.724961 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 13:06:34.803708 ignition[795]: Ignition 2.19.0
Mar 2 13:06:34.803814 ignition[795]: Stage: disks
Mar 2 13:06:34.804540 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:06:34.804559 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:06:34.810635 ignition[795]: disks: disks passed
Mar 2 13:06:34.810780 ignition[795]: Ignition finished successfully
Mar 2 13:06:34.842856 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 13:06:34.866106 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 13:06:34.882886 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 13:06:34.893564 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:06:34.899403 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:06:34.929632 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:06:34.948858 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 13:06:35.027795 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 2 13:06:35.041103 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 13:06:35.078645 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 13:06:35.654064 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none.
Mar 2 13:06:35.655842 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 13:06:35.662441 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:06:35.702525 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:06:35.714449 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 13:06:35.734763 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 2 13:06:35.766616 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (814)
Mar 2 13:06:35.740321 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 13:06:35.740368 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:06:35.798030 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:06:35.798064 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:06:35.798076 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:06:35.750051 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 13:06:35.833350 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:06:35.772592 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 13:06:35.851007 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:06:35.998465 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 13:06:36.028741 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Mar 2 13:06:36.069642 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 13:06:36.088773 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 13:06:36.516229 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 13:06:36.539846 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 13:06:36.565056 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 13:06:36.581590 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 13:06:36.616081 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:06:36.750206 ignition[926]: INFO : Ignition 2.19.0
Mar 2 13:06:36.750206 ignition[926]: INFO : Stage: mount
Mar 2 13:06:36.750206 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:06:36.750206 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:06:36.795015 ignition[926]: INFO : mount: mount passed
Mar 2 13:06:36.795015 ignition[926]: INFO : Ignition finished successfully
Mar 2 13:06:36.819766 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 13:06:36.851893 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 13:06:36.867053 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 13:06:36.897072 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:06:36.959641 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Mar 2 13:06:36.976468 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:06:36.976550 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:06:36.976570 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:06:37.004756 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:06:37.012974 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:06:37.123341 ignition[957]: INFO : Ignition 2.19.0
Mar 2 13:06:37.123341 ignition[957]: INFO : Stage: files
Mar 2 13:06:37.123341 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:06:37.123341 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:06:37.165581 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 13:06:37.165581 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 13:06:37.165581 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 13:06:37.191655 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 13:06:37.200481 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 13:06:37.209656 unknown[957]: wrote ssh authorized keys file for user: core
Mar 2 13:06:37.219406 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 13:06:37.219406 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:06:37.219406 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 2 13:06:37.354908 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 13:06:37.550005 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:06:37.550005 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 2 13:06:37.550005 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 2 13:06:37.749861 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 2 13:06:37.917198 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 2 13:06:37.917198 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:06:37.948809 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Mar 2 13:06:38.158827 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 2 13:06:38.837591 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Mar 2 13:06:38.837591 ignition[957]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 2 13:06:38.867006 ignition[957]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:06:38.867006 ignition[957]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:06:38.867006 ignition[957]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 2 13:06:38.867006 ignition[957]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 2 13:06:38.867006 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 13:06:38.867006 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 13:06:38.867006 ignition[957]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 2 13:06:38.867006 ignition[957]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 2 13:06:39.044647 ignition[957]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 13:06:39.058549 ignition[957]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 13:06:39.069596 ignition[957]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 2 13:06:39.069596 ignition[957]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 13:06:39.069596 ignition[957]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 13:06:39.069596 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:06:39.069596 ignition[957]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:06:39.069596 ignition[957]: INFO : files: files passed
Mar 2 13:06:39.069596 ignition[957]: INFO : Ignition finished successfully
Mar 2 13:06:39.066786 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 13:06:39.128854 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
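The files stage above is driven entirely by the Ignition config fetched earlier: `op(3)`-`op(b)` are `storage.files` entries, and `op(c)`-`op(12)` are `systemd.units` entries with enablement presets. The actual config is not shown in the log, but an illustrative Ignition v3 fragment producing this kind of activity (paths and unit contents here are hypothetical) might look like:

```json
{
  "ignition": { "version": "3.3.0" },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "name": "prepare-helm.service",
        "enabled": true,
        "contents": "[Unit]\nDescription=Unpack helm (hypothetical)\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/tar -C /opt/bin -xzf /opt/helm-v3.17.3-linux-amd64.tar.gz\n\n[Install]\nWantedBy=multi-user.target"
      },
      { "name": "coreos-metadata.service", "enabled": false }
    ]
  }
}
```

`"enabled": true` is what appears in the log as "setting preset to enabled", while `"enabled": false` triggers the symlink removal logged for `coreos-metadata.service`.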
Mar 2 13:06:39.143533 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:06:39.160972 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 13:06:39.204576 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 2 13:06:39.161217 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 13:06:39.219372 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:06:39.219372 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:06:39.184954 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:06:39.255476 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:06:39.193102 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 13:06:39.219707 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 13:06:39.286914 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 13:06:39.287990 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 13:06:39.297366 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 13:06:39.297491 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 13:06:39.305138 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 13:06:39.314028 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 13:06:39.354806 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:06:39.395525 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 13:06:39.415865 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:06:39.424015 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:06:39.437616 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 13:06:39.451188 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 13:06:39.451534 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:06:39.467822 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 2 13:06:39.477399 systemd[1]: Stopped target basic.target - Basic System.
Mar 2 13:06:39.489482 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 2 13:06:39.500898 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:06:39.513162 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 2 13:06:39.520336 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 2 13:06:39.532516 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:06:39.539487 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 2 13:06:39.551541 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 2 13:06:39.563940 systemd[1]: Stopped target swap.target - Swaps.
Mar 2 13:06:39.573993 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 2 13:06:39.574212 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:06:39.577144 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:06:39.583786 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:06:39.590961 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 2 13:06:39.591398 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:06:39.747080 ignition[1011]: INFO : Ignition 2.19.0
Mar 2 13:06:39.747080 ignition[1011]: INFO : Stage: umount
Mar 2 13:06:39.747080 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:06:39.747080 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:06:39.747080 ignition[1011]: INFO : umount: umount passed
Mar 2 13:06:39.747080 ignition[1011]: INFO : Ignition finished successfully
Mar 2 13:06:39.591822 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 2 13:06:39.592080 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:06:39.598225 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 2 13:06:39.598563 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:06:39.600483 systemd[1]: Stopped target paths.target - Path Units.
Mar 2 13:06:39.601212 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 2 13:06:39.605471 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:06:39.613421 systemd[1]: Stopped target slices.target - Slice Units.
Mar 2 13:06:39.614021 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 2 13:06:39.621215 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 2 13:06:39.621490 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:06:39.622181 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 2 13:06:39.622445 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:06:39.623225 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 2 13:06:39.623500 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:06:39.624383 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 2 13:06:39.624610 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 2 13:06:39.688044 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 2 13:06:39.697388 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 2 13:06:39.697880 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:06:39.731235 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 2 13:06:39.747030 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 2 13:06:39.747705 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:06:39.767521 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 2 13:06:39.767843 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:06:39.791007 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 2 13:06:39.793052 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 2 13:06:39.794458 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 2 13:06:39.813867 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 2 13:06:39.814101 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 2 13:06:39.827382 systemd[1]: Stopped target network.target - Network.
Mar 2 13:06:39.838139 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 2 13:06:39.838380 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 2 13:06:39.855724 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 2 13:06:39.855901 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 2 13:06:39.861898 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 2 13:06:39.861966 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 2 13:06:39.870920 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 2 13:06:39.871017 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 2 13:06:39.879698 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 2 13:06:39.892869 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 2 13:06:39.909977 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 2 13:06:39.910361 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 2 13:06:39.925182 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 2 13:06:39.925406 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:06:39.931691 systemd-networkd[782]: eth0: DHCPv6 lease lost
Mar 2 13:06:39.949334 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 2 13:06:39.949573 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 2 13:06:40.436597 systemd-journald[195]: Received SIGTERM from PID 1 (systemd).
Mar 2 13:06:39.971667 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 2 13:06:39.971938 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 2 13:06:39.982924 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 2 13:06:39.983002 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:06:39.994572 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 2 13:06:39.994646 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 2 13:06:40.029081 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 2 13:06:40.046080 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 2 13:06:40.046357 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:06:40.059379 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 2 13:06:40.059488 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:06:40.066061 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 2 13:06:40.066151 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:06:40.073496 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:06:40.103892 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 2 13:06:40.104344 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 2 13:06:40.115526 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 2 13:06:40.115934 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:06:40.131401 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 2 13:06:40.131525 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:06:40.144450 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 2 13:06:40.144541 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:06:40.161348 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 2 13:06:40.161467 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:06:40.173710 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 2 13:06:40.174209 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:06:40.189493 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:06:40.189617 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:06:40.233580 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 2 13:06:40.251604 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 2 13:06:40.251721 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:06:40.260221 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:06:40.260419 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:06:40.279127 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 2 13:06:40.279459 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 2 13:06:40.290044 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 2 13:06:40.335039 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 2 13:06:40.359071 systemd[1]: Switching root.
Mar 2 13:06:40.636161 systemd-journald[195]: Journal stopped
Mar 2 13:06:43.268187 kernel: SELinux: policy capability network_peer_controls=1
Mar 2 13:06:43.268408 kernel: SELinux: policy capability open_perms=1
Mar 2 13:06:43.268432 kernel: SELinux: policy capability extended_socket_class=1
Mar 2 13:06:43.268455 kernel: SELinux: policy capability always_check_network=0
Mar 2 13:06:43.268471 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 2 13:06:43.268498 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 2 13:06:43.268528 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 2 13:06:43.268547 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 2 13:06:43.268567 kernel: audit: type=1403 audit(1772456800.853:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 2 13:06:43.268586 systemd[1]: Successfully loaded SELinux policy in 134.544ms.
Mar 2 13:06:43.268628 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.303ms.
Mar 2 13:06:43.268651 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:06:43.268672 systemd[1]: Detected virtualization kvm.
Mar 2 13:06:43.268690 systemd[1]: Detected architecture x86-64.
Mar 2 13:06:43.268709 systemd[1]: Detected first boot.
Mar 2 13:06:43.268728 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:06:43.268748 zram_generator::config[1056]: No configuration found.
Mar 2 13:06:43.268768 systemd[1]: Populated /etc with preset unit settings.
Mar 2 13:06:43.268862 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 2 13:06:43.268886 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 2 13:06:43.268906 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:06:43.268924 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 2 13:06:43.268940 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 2 13:06:43.268958 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 2 13:06:43.268975 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 2 13:06:43.268992 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 2 13:06:43.269010 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 2 13:06:43.269035 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 2 13:06:43.269052 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 2 13:06:43.269069 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:06:43.269093 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:06:43.269114 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 2 13:06:43.269134 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 2 13:06:43.269152 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 2 13:06:43.269171 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:06:43.269192 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 2 13:06:43.269215 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:06:43.269235 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 2 13:06:43.269391 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 2 13:06:43.269412 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:06:43.269434 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 2 13:06:43.269458 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:06:43.269477 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:06:43.269503 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:06:43.269522 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:06:43.269542 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 2 13:06:43.269562 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 2 13:06:43.269581 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:06:43.269601 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:06:43.269618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:06:43.269638 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 2 13:06:43.269656 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 2 13:06:43.269675 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 2 13:06:43.269699 systemd[1]: Mounting media.mount - External Media Directory...
Mar 2 13:06:43.269719 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:06:43.269738 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 2 13:06:43.269756 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 2 13:06:43.269775 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 2 13:06:43.269865 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 2 13:06:43.269891 systemd[1]: Reached target machines.target - Containers.
Mar 2 13:06:43.269910 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 2 13:06:43.269936 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:06:43.269960 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:06:43.269980 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 2 13:06:43.270000 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:06:43.270019 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:06:43.270037 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:06:43.270055 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 2 13:06:43.270073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:06:43.270092 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 2 13:06:43.270117 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 2 13:06:43.270138 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 2 13:06:43.270156 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 2 13:06:43.270174 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 2 13:06:43.270194 kernel: fuse: init (API version 7.39)
Mar 2 13:06:43.270212 kernel: loop: module loaded
Mar 2 13:06:43.270230 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:06:43.270367 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:06:43.270424 systemd-journald[1140]: Collecting audit messages is disabled.
Mar 2 13:06:43.270469 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 2 13:06:43.270491 systemd-journald[1140]: Journal started
Mar 2 13:06:43.270522 systemd-journald[1140]: Runtime Journal (/run/log/journal/920b1f5464994dff80d5d3049d639834) is 6.0M, max 48.4M, 42.3M free.
Mar 2 13:06:42.100139 systemd[1]: Queued start job for default target multi-user.target.
Mar 2 13:06:42.141441 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 2 13:06:42.142689 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 2 13:06:42.143455 systemd[1]: systemd-journald.service: Consumed 3.166s CPU time.
Mar 2 13:06:43.279405 kernel: ACPI: bus type drm_connector registered
Mar 2 13:06:43.304980 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 2 13:06:43.322408 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:06:43.332871 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 2 13:06:43.332947 systemd[1]: Stopped verity-setup.service.
Mar 2 13:06:43.347406 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:06:43.361974 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:06:43.363896 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 2 13:06:43.371615 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 2 13:06:43.380782 systemd[1]: Mounted media.mount - External Media Directory.
Mar 2 13:06:43.387960 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 2 13:06:43.396509 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 2 13:06:43.405100 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 2 13:06:43.413413 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 2 13:06:43.422420 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:06:43.432997 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 2 13:06:43.433506 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 2 13:06:43.443638 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:06:43.444436 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:06:43.455502 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:06:43.456754 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:06:43.466427 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:06:43.466769 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:06:43.480969 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 2 13:06:43.481592 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 2 13:06:43.491711 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:06:43.492180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:06:43.501475 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:06:43.510772 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 2 13:06:43.523699 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 2 13:06:43.533695 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:06:43.565727 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 2 13:06:43.587637 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 2 13:06:43.596754 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 2 13:06:43.603375 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 2 13:06:43.603465 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:06:43.612066 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 2 13:06:43.623889 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 2 13:06:43.637014 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 2 13:06:43.645635 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:06:43.658980 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 2 13:06:43.672737 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 2 13:06:43.681527 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:06:43.685111 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 2 13:06:43.695593 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:06:43.700627 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:06:43.712221 systemd-journald[1140]: Time spent on flushing to /var/log/journal/920b1f5464994dff80d5d3049d639834 is 62.115ms for 945 entries.
Mar 2 13:06:43.712221 systemd-journald[1140]: System Journal (/var/log/journal/920b1f5464994dff80d5d3049d639834) is 8.0M, max 195.6M, 187.6M free.
Mar 2 13:06:43.835500 systemd-journald[1140]: Received client request to flush runtime journal.
Mar 2 13:06:43.835571 kernel: loop0: detected capacity change from 0 to 219192
Mar 2 13:06:43.715115 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 2 13:06:43.745062 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 2 13:06:43.763469 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 2 13:06:43.777218 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 2 13:06:43.785749 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 2 13:06:43.802607 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 2 13:06:43.829629 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 2 13:06:43.841414 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 2 13:06:43.877706 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 2 13:06:43.903398 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 2 13:06:43.912236 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 2 13:06:43.926179 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:06:43.942558 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 2 13:06:43.953148 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 2 13:06:43.982499 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:06:43.992785 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 2 13:06:43.993750 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 2 13:06:43.999421 kernel: loop1: detected capacity change from 0 to 142488
Mar 2 13:06:44.091103 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 2 13:06:44.091129 systemd-tmpfiles[1188]: ACLs are not supported, ignoring.
Mar 2 13:06:44.110025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:06:44.141559 kernel: loop2: detected capacity change from 0 to 140768
Mar 2 13:06:44.278446 kernel: loop3: detected capacity change from 0 to 219192
Mar 2 13:06:44.366468 kernel: loop4: detected capacity change from 0 to 142488
Mar 2 13:06:44.446891 kernel: loop5: detected capacity change from 0 to 140768
Mar 2 13:06:44.520884 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 2 13:06:44.525992 (sd-merge)[1194]: Merged extensions into '/usr'.
Mar 2 13:06:44.541534 systemd[1]: Reloading requested from client PID 1171 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 2 13:06:44.541550 systemd[1]: Reloading...
Mar 2 13:06:44.679464 zram_generator::config[1224]: No configuration found.
Mar 2 13:06:44.915229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:06:44.985968 ldconfig[1166]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 2 13:06:45.002793 systemd[1]: Reloading finished in 460 ms.
Mar 2 13:06:45.054994 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 2 13:06:45.062705 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 2 13:06:45.070998 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 2 13:06:45.107461 systemd[1]: Starting ensure-sysext.service...
Mar 2 13:06:45.116509 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:06:45.125127 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:06:45.134170 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)...
Mar 2 13:06:45.134192 systemd[1]: Reloading...
Mar 2 13:06:45.156428 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 2 13:06:45.157006 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 2 13:06:45.159420 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 2 13:06:45.159770 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Mar 2 13:06:45.159950 systemd-tmpfiles[1259]: ACLs are not supported, ignoring.
Mar 2 13:06:45.166460 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:06:45.166478 systemd-tmpfiles[1259]: Skipping /boot
Mar 2 13:06:45.187045 systemd-tmpfiles[1259]: Detected autofs mount point /boot during canonicalization of boot.
Mar 2 13:06:45.187110 systemd-tmpfiles[1259]: Skipping /boot
Mar 2 13:06:45.196387 systemd-udevd[1260]: Using default interface naming scheme 'v255'.
Mar 2 13:06:45.241394 zram_generator::config[1289]: No configuration found.
Mar 2 13:06:45.375364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1338)
Mar 2 13:06:45.445440 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:06:45.480413 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 2 13:06:45.494355 kernel: ACPI: button: Power Button [PWRF]
Mar 2 13:06:45.565077 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 2 13:06:45.565607 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 2 13:06:45.565648 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 2 13:06:45.573506 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 2 13:06:45.582730 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:06:45.590741 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 2 13:06:45.591050 systemd[1]: Reloading finished in 455 ms.
Mar 2 13:06:45.634477 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:06:45.663980 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:06:45.790042 systemd[1]: Finished ensure-sysext.service.
Mar 2 13:06:45.806632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:06:46.030187 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:06:46.041892 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 2 13:06:46.050498 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 2 13:06:46.056873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 2 13:06:46.069052 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 2 13:06:46.090705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 2 13:06:46.106689 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 2 13:06:46.115515 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 2 13:06:46.120356 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 2 13:06:46.128483 kernel: mousedev: PS/2 mouse device common for all mice
Mar 2 13:06:46.132047 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 2 13:06:46.144372 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:06:46.165980 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:06:46.178889 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 2 13:06:46.190499 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 2 13:06:46.202127 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:06:46.208519 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 2 13:06:46.216939 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 2 13:06:46.217477 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 2 13:06:46.218467 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 2 13:06:46.232476 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 2 13:06:46.241163 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 2 13:06:46.241732 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 2 13:06:46.256885 kernel: kvm_amd: TSC scaling supported
Mar 2 13:06:46.257049 kernel: kvm_amd: Nested Virtualization enabled
Mar 2 13:06:46.257081 kernel: kvm_amd: Nested Paging enabled
Mar 2 13:06:46.262503 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 2 13:06:46.262593 kernel: kvm_amd: PMU virtualization is disabled
Mar 2 13:06:46.275684 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 2 13:06:46.278373 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 2 13:06:46.288470 augenrules[1383]: No rules
Mar 2 13:06:46.337570 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 2 13:06:46.360969 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:06:46.376530 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 2 13:06:46.418739 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 2 13:06:46.419730 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 2 13:06:46.432102 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 2 13:06:46.444991 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 2 13:06:46.460515 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 2 13:06:46.468815 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 2 13:06:46.471448 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 2 13:06:46.482892 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 2 13:06:46.679227 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 2 13:06:46.709437 kernel: EDAC MC: Ver: 3.0.0
Mar 2 13:06:46.844106 systemd-networkd[1374]: lo: Link UP
Mar 2 13:06:46.844172 systemd-networkd[1374]: lo: Gained carrier
Mar 2 13:06:46.849937 systemd-networkd[1374]: Enumeration completed
Mar 2 13:06:46.852183 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:06:46.852191 systemd-networkd[1374]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:06:46.855396 systemd-networkd[1374]: eth0: Link UP
Mar 2 13:06:46.855408 systemd-networkd[1374]: eth0: Gained carrier
Mar 2 13:06:46.855428 systemd-networkd[1374]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:06:46.878624 systemd-resolved[1376]: Positive Trust Anchors:
Mar 2 13:06:46.878694 systemd-resolved[1376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:06:46.878744 systemd-resolved[1376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:06:46.885438 systemd-networkd[1374]: eth0: DHCPv4 address 10.0.0.56/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:06:46.886727 systemd-timesyncd[1377]: Network configuration changed, trying to establish connection.
Mar 2 13:06:47.337152 systemd-timesyncd[1377]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 2 13:06:47.337484 systemd-timesyncd[1377]: Initial clock synchronization to Mon 2026-03-02 13:06:47.336752 UTC.
Mar 2 13:06:47.337781 systemd-resolved[1376]: Defaulting to hostname 'linux'.
Mar 2 13:06:47.503050 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 2 13:06:47.503491 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:06:47.512781 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:06:47.514111 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 2 13:06:47.526745 systemd[1]: Reached target network.target - Network.
Mar 2 13:06:47.534189 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:06:47.542765 systemd[1]: Reached target time-set.target - System Time Set.
Mar 2 13:06:47.572711 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 2 13:06:47.583103 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 2 13:06:47.592733 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:06:47.611309 lvm[1415]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:06:47.663033 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 2 13:06:47.674052 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:06:47.689106 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:06:47.697954 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 2 13:06:47.707936 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 2 13:06:47.716318 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 2 13:06:47.725535 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 2 13:06:47.735509 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 2 13:06:47.744264 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 2 13:06:47.744668 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:06:47.753015 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:06:47.763591 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 2 13:06:47.779976 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 2 13:06:47.803292 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 2 13:06:47.819238 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 2 13:06:47.838812 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 2 13:06:47.852798 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:06:47.855979 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 2 13:06:47.863042 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:06:47.868699 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:06:47.868789 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 2 13:06:47.871607 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 2 13:06:47.887117 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 2 13:06:47.898257 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 2 13:06:47.917095 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 2 13:06:47.924969 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 2 13:06:47.928490 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 2 13:06:47.939756 jq[1424]: false
Mar 2 13:06:47.948142 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 2 13:06:47.958382 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 2 13:06:47.971217 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 2 13:06:47.980077 extend-filesystems[1425]: Found loop3
Mar 2 13:06:47.980077 extend-filesystems[1425]: Found loop4
Mar 2 13:06:47.980077 extend-filesystems[1425]: Found loop5
Mar 2 13:06:47.980077 extend-filesystems[1425]: Found sr0
Mar 2 13:06:47.980077 extend-filesystems[1425]: Found vda
Mar 2 13:06:47.980077 extend-filesystems[1425]: Found vda1
Mar 2 13:06:47.980077 extend-filesystems[1425]: Found vda2
Mar 2 13:06:47.980077 extend-filesystems[1425]: Found vda3
Mar 2 13:06:47.980077 extend-filesystems[1425]: Found usr
Mar 2 13:06:48.067160 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1328)
Mar 2 13:06:47.988191 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 2 13:06:48.067511 extend-filesystems[1425]: Found vda4
Mar 2 13:06:48.067511 extend-filesystems[1425]: Found vda6
Mar 2 13:06:48.067511 extend-filesystems[1425]: Found vda7
Mar 2 13:06:48.067511 extend-filesystems[1425]: Found vda9
Mar 2 13:06:48.067511 extend-filesystems[1425]: Checking size of /dev/vda9
Mar 2 13:06:48.067511 extend-filesystems[1425]: Resized partition /dev/vda9
Mar 2 13:06:48.129815 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 2 13:06:47.997062 dbus-daemon[1423]: [system] SELinux support is enabled
Mar 2 13:06:48.004425 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 2 13:06:48.131138 extend-filesystems[1443]: resize2fs 1.47.1 (20-May-2024)
Mar 2 13:06:48.005679 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 2 13:06:48.164221 update_engine[1435]: I20260302 13:06:48.126510 1435 main.cc:92] Flatcar Update Engine starting
Mar 2 13:06:48.164221 update_engine[1435]: I20260302 13:06:48.149974 1435 update_check_scheduler.cc:74] Next update check in 7m32s
Mar 2 13:06:48.012079 systemd[1]: Starting update-engine.service - Update Engine...
Mar 2 13:06:48.016227 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 2 13:06:48.179334 jq[1436]: true
Mar 2 13:06:48.035016 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 2 13:06:48.093691 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 2 13:06:48.102340 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 2 13:06:48.102800 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 2 13:06:48.130629 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 2 13:06:48.131276 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 2 13:06:48.147519 systemd[1]: motdgen.service: Deactivated successfully.
Mar 2 13:06:48.149014 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 2 13:06:48.216636 (ntainerd)[1451]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 2 13:06:48.297544 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 2 13:06:48.297646 tar[1449]: linux-amd64/LICENSE
Mar 2 13:06:48.251351 systemd[1]: Started update-engine.service - Update Engine.
Mar 2 13:06:48.310692 jq[1450]: true
Mar 2 13:06:48.311650 tar[1449]: linux-amd64/helm
Mar 2 13:06:48.259372 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 2 13:06:48.259419 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 2 13:06:48.280076 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 2 13:06:48.280118 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 2 13:06:48.302984 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 2 13:06:48.311752 systemd-logind[1433]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 2 13:06:48.311789 systemd-logind[1433]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 2 13:06:48.316046 extend-filesystems[1443]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 2 13:06:48.316046 extend-filesystems[1443]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 2 13:06:48.316046 extend-filesystems[1443]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 2 13:06:48.313506 systemd-logind[1433]: New seat seat0.
Mar 2 13:06:48.419712 extend-filesystems[1425]: Resized filesystem in /dev/vda9
Mar 2 13:06:48.329309 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 2 13:06:48.364042 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 2 13:06:48.364415 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 2 13:06:48.376011 locksmithd[1461]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 2 13:06:48.387381 systemd-networkd[1374]: eth0: Gained IPv6LL
Mar 2 13:06:48.410054 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 2 13:06:48.479264 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 2 13:06:48.493333 systemd[1]: Reached target network-online.target - Network is Online.
Mar 2 13:06:48.521366 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Mar 2 13:06:48.545109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:06:48.568645 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 2 13:06:48.577978 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 2 13:06:48.596790 bash[1493]: Updated "/home/core/.ssh/authorized_keys"
Mar 2 13:06:48.598351 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 2 13:06:48.621209 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 2 13:06:48.649593 systemd[1]: coreos-metadata.service: Deactivated successfully.
Mar 2 13:06:48.650565 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Mar 2 13:06:48.661610 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 2 13:06:48.679069 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 2 13:06:48.724498 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 2 13:06:48.733813 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 2 13:06:48.754104 systemd[1]: Started sshd@0-10.0.0.56:22-10.0.0.1:54882.service - OpenSSH per-connection server daemon (10.0.0.1:54882).
Mar 2 13:06:48.784799 containerd[1451]: time="2026-03-02T13:06:48.784313778Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Mar 2 13:06:48.808801 systemd[1]: issuegen.service: Deactivated successfully.
Mar 2 13:06:48.811116 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 2 13:06:48.855539 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 2 13:06:48.867563 containerd[1451]: time="2026-03-02T13:06:48.867203650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:48.876133 containerd[1451]: time="2026-03-02T13:06:48.875956538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:06:48.876133 containerd[1451]: time="2026-03-02T13:06:48.876067766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 2 13:06:48.876133 containerd[1451]: time="2026-03-02T13:06:48.876097330Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 2 13:06:48.876571 containerd[1451]: time="2026-03-02T13:06:48.876375920Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 2 13:06:48.876571 containerd[1451]: time="2026-03-02T13:06:48.876521052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:48.876666 containerd[1451]: time="2026-03-02T13:06:48.876648349Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:06:48.876704 containerd[1451]: time="2026-03-02T13:06:48.876668216Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:48.877249 containerd[1451]: time="2026-03-02T13:06:48.877052854Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:06:48.877249 containerd[1451]: time="2026-03-02T13:06:48.877221158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:48.877249 containerd[1451]: time="2026-03-02T13:06:48.877243169Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:06:48.877364 containerd[1451]: time="2026-03-02T13:06:48.877256333Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:48.877396 containerd[1451]: time="2026-03-02T13:06:48.877376057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:48.878367 containerd[1451]: time="2026-03-02T13:06:48.878237866Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 2 13:06:48.878670 containerd[1451]: time="2026-03-02T13:06:48.878574013Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 2 13:06:48.891997 containerd[1451]: time="2026-03-02T13:06:48.889436179Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 2 13:06:48.891997 containerd[1451]: time="2026-03-02T13:06:48.890056357Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 2 13:06:48.891997 containerd[1451]: time="2026-03-02T13:06:48.890148239Z" level=info msg="metadata content store policy set" policy=shared
Mar 2 13:06:48.913338 containerd[1451]: time="2026-03-02T13:06:48.910541954Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 2 13:06:48.918638 containerd[1451]: time="2026-03-02T13:06:48.917160548Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 2 13:06:48.918638 containerd[1451]: time="2026-03-02T13:06:48.917809750Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 2 13:06:48.918638 containerd[1451]: time="2026-03-02T13:06:48.917920326Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 2 13:06:48.918638 containerd[1451]: time="2026-03-02T13:06:48.917943119Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 2 13:06:48.918638 containerd[1451]: time="2026-03-02T13:06:48.918180702Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 2 13:06:48.918964 containerd[1451]: time="2026-03-02T13:06:48.918653765Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 2 13:06:48.918964 containerd[1451]: time="2026-03-02T13:06:48.918805198Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 2 13:06:48.923282 containerd[1451]: time="2026-03-02T13:06:48.919078358Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 2 13:06:48.923282 containerd[1451]: time="2026-03-02T13:06:48.920644882Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 2 13:06:48.923282 containerd[1451]: time="2026-03-02T13:06:48.920976290Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 2 13:06:48.923282 containerd[1451]: time="2026-03-02T13:06:48.921213733Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.923321648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924321311Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924345577Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924361798Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924375973Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924387175Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924422881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924441066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924521676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924533908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924546121Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924560048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.924563 containerd[1451]: time="2026-03-02T13:06:48.924571699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924582960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924594592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924608197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924619228Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924635118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924647491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924661857Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924683157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924694068Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924704367Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924807559Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924936730Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 2 13:06:48.925121 containerd[1451]: time="2026-03-02T13:06:48.924950266Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 2 13:06:48.925725 containerd[1451]: time="2026-03-02T13:06:48.924961827Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 2 13:06:48.925725 containerd[1451]: time="2026-03-02T13:06:48.924971095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.930263 containerd[1451]: time="2026-03-02T13:06:48.926948408Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 2 13:06:48.930263 containerd[1451]: time="2026-03-02T13:06:48.927028136Z" level=info msg="NRI interface is disabled by configuration."
Mar 2 13:06:48.930263 containerd[1451]: time="2026-03-02T13:06:48.927051950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 2 13:06:48.930423 containerd[1451]: time="2026-03-02T13:06:48.927426821Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Mar 2 13:06:48.930423 containerd[1451]: time="2026-03-02T13:06:48.927595876Z" level=info msg="Connect containerd service"
Mar 2 13:06:48.930423 containerd[1451]: time="2026-03-02T13:06:48.927643355Z" level=info msg="using legacy CRI server"
Mar 2 13:06:48.930423 containerd[1451]: time="2026-03-02T13:06:48.927663863Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Mar 2 13:06:48.930423 containerd[1451]: time="2026-03-02T13:06:48.927786922Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Mar 2 13:06:48.936990 containerd[1451]: time="2026-03-02T13:06:48.934182293Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:06:48.936990 containerd[1451]: time="2026-03-02T13:06:48.934679162Z" level=info msg="Start subscribing containerd event"
Mar 2 13:06:48.936990 containerd[1451]: time="2026-03-02T13:06:48.935651435Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 2 13:06:48.936990 containerd[1451]: time="2026-03-02T13:06:48.935731795Z" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 2 13:06:48.937197 containerd[1451]: time="2026-03-02T13:06:48.937036513Z" level=info msg="Start recovering state"
Mar 2 13:06:48.937363 containerd[1451]: time="2026-03-02T13:06:48.937249871Z" level=info msg="Start event monitor"
Mar 2 13:06:48.938031 containerd[1451]: time="2026-03-02T13:06:48.937781162Z" level=info msg="Start snapshots syncer"
Mar 2 13:06:48.939120 containerd[1451]: time="2026-03-02T13:06:48.939007832Z" level=info msg="Start cni network conf syncer for default"
Mar 2 13:06:48.940360 containerd[1451]: time="2026-03-02T13:06:48.940022847Z" level=info msg="Start streaming server"
Mar 2 13:06:48.941117 systemd[1]: Started containerd.service - containerd container runtime.
Mar 2 13:06:48.941746 containerd[1451]: time="2026-03-02T13:06:48.941648300Z" level=info msg="containerd successfully booted in 0.159350s"
Mar 2 13:06:48.956194 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 2 13:06:48.987305 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 2 13:06:49.007249 sshd[1518]: Accepted publickey for core from 10.0.0.1 port 54882 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:06:49.010797 sshd[1518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:49.015633 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 2 13:06:49.020381 systemd[1]: Reached target getty.target - Login Prompts.
Mar 2 13:06:49.045647 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Mar 2 13:06:49.069731 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Mar 2 13:06:49.088208 systemd-logind[1433]: New session 1 of user core.
Mar 2 13:06:49.153394 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Mar 2 13:06:49.196801 systemd[1]: Starting user@500.service - User Manager for UID 500...
Mar 2 13:06:49.238304 (systemd)[1534]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Mar 2 13:06:49.641362 systemd[1534]: Queued start job for default target default.target.
Mar 2 13:06:49.674310 systemd[1534]: Created slice app.slice - User Application Slice.
Mar 2 13:06:49.674364 systemd[1534]: Reached target paths.target - Paths.
Mar 2 13:06:49.674389 systemd[1534]: Reached target timers.target - Timers.
Mar 2 13:06:49.684054 systemd[1534]: Starting dbus.socket - D-Bus User Message Bus Socket...
Mar 2 13:06:49.756762 systemd[1534]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Mar 2 13:06:49.756979 systemd[1534]: Reached target sockets.target - Sockets.
Mar 2 13:06:49.757000 systemd[1534]: Reached target basic.target - Basic System.
Mar 2 13:06:49.757072 systemd[1534]: Reached target default.target - Main User Target.
Mar 2 13:06:49.757127 systemd[1534]: Startup finished in 498ms.
Mar 2 13:06:49.757711 systemd[1]: Started user@500.service - User Manager for UID 500.
Mar 2 13:06:49.838050 systemd[1]: Started session-1.scope - Session 1 of User core.
Mar 2 13:06:49.869373 tar[1449]: linux-amd64/README.md
Mar 2 13:06:49.897979 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Mar 2 13:06:49.982112 systemd[1]: Started sshd@1-10.0.0.56:22-10.0.0.1:54174.service - OpenSSH per-connection server daemon (10.0.0.1:54174).
Mar 2 13:06:50.113718 sshd[1548]: Accepted publickey for core from 10.0.0.1 port 54174 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:06:50.119314 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:50.150651 systemd-logind[1433]: New session 2 of user core.
Mar 2 13:06:50.160742 systemd[1]: Started session-2.scope - Session 2 of User core.
Mar 2 13:06:50.340276 sshd[1548]: pam_unix(sshd:session): session closed for user core
Mar 2 13:06:50.361794 systemd[1]: sshd@1-10.0.0.56:22-10.0.0.1:54174.service: Deactivated successfully.
Mar 2 13:06:50.367141 systemd[1]: session-2.scope: Deactivated successfully.
Mar 2 13:06:50.369691 systemd-logind[1433]: Session 2 logged out. Waiting for processes to exit.
Mar 2 13:06:50.390963 systemd[1]: Started sshd@2-10.0.0.56:22-10.0.0.1:54190.service - OpenSSH per-connection server daemon (10.0.0.1:54190).
Mar 2 13:06:50.413721 systemd-logind[1433]: Removed session 2.
Mar 2 13:06:50.457448 sshd[1555]: Accepted publickey for core from 10.0.0.1 port 54190 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:06:50.463369 sshd[1555]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:06:50.486991 systemd-logind[1433]: New session 3 of user core.
Mar 2 13:06:50.511026 systemd[1]: Started session-3.scope - Session 3 of User core.
Mar 2 13:06:50.738813 sshd[1555]: pam_unix(sshd:session): session closed for user core
Mar 2 13:06:50.760933 systemd[1]: sshd@2-10.0.0.56:22-10.0.0.1:54190.service: Deactivated successfully.
Mar 2 13:06:50.777807 systemd[1]: session-3.scope: Deactivated successfully.
Mar 2 13:06:50.785020 systemd-logind[1433]: Session 3 logged out. Waiting for processes to exit.
Mar 2 13:06:50.798735 systemd-logind[1433]: Removed session 3.
Mar 2 13:06:51.502035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:06:51.516311 systemd[1]: Reached target multi-user.target - Multi-User System.
Mar 2 13:06:51.517327 (kubelet)[1565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:06:51.706136 systemd[1]: Startup finished in 5.853s (kernel) + 18.405s (initrd) + 10.540s (userspace) = 34.799s.
Mar 2 13:06:55.588521 kubelet[1565]: E0302 13:06:55.585674 1565 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:06:55.608659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:06:55.645658 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:06:55.653125 systemd[1]: kubelet.service: Consumed 3.457s CPU time.
Mar 2 13:07:00.805718 systemd[1]: Started sshd@3-10.0.0.56:22-10.0.0.1:43766.service - OpenSSH per-connection server daemon (10.0.0.1:43766).
Mar 2 13:07:00.946809 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 43766 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:07:00.953117 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:07:00.990188 systemd-logind[1433]: New session 4 of user core.
Mar 2 13:07:01.015317 systemd[1]: Started session-4.scope - Session 4 of User core.
Mar 2 13:07:01.147207 sshd[1580]: pam_unix(sshd:session): session closed for user core
Mar 2 13:07:01.176730 systemd[1]: sshd@3-10.0.0.56:22-10.0.0.1:43766.service: Deactivated successfully.
Mar 2 13:07:01.181477 systemd[1]: session-4.scope: Deactivated successfully.
Mar 2 13:07:01.184803 systemd-logind[1433]: Session 4 logged out. Waiting for processes to exit.
Mar 2 13:07:01.207791 systemd[1]: Started sshd@4-10.0.0.56:22-10.0.0.1:43770.service - OpenSSH per-connection server daemon (10.0.0.1:43770).
Mar 2 13:07:01.210735 systemd-logind[1433]: Removed session 4.
Mar 2 13:07:01.318152 sshd[1587]: Accepted publickey for core from 10.0.0.1 port 43770 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:07:01.320758 sshd[1587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:07:01.375067 systemd-logind[1433]: New session 5 of user core.
Mar 2 13:07:01.395544 systemd[1]: Started session-5.scope - Session 5 of User core.
Mar 2 13:07:01.504119 sshd[1587]: pam_unix(sshd:session): session closed for user core
Mar 2 13:07:01.516302 systemd[1]: sshd@4-10.0.0.56:22-10.0.0.1:43770.service: Deactivated successfully.
Mar 2 13:07:01.543478 systemd[1]: session-5.scope: Deactivated successfully.
Mar 2 13:07:01.548393 systemd-logind[1433]: Session 5 logged out. Waiting for processes to exit.
Mar 2 13:07:01.561482 systemd[1]: Started sshd@5-10.0.0.56:22-10.0.0.1:43786.service - OpenSSH per-connection server daemon (10.0.0.1:43786).
Mar 2 13:07:01.565084 systemd-logind[1433]: Removed session 5.
Mar 2 13:07:01.613749 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 43786 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:07:01.616764 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:07:01.657231 systemd-logind[1433]: New session 6 of user core.
Mar 2 13:07:01.675390 systemd[1]: Started session-6.scope - Session 6 of User core.
Mar 2 13:07:01.785717 sshd[1594]: pam_unix(sshd:session): session closed for user core
Mar 2 13:07:01.818217 systemd[1]: sshd@5-10.0.0.56:22-10.0.0.1:43786.service: Deactivated successfully.
Mar 2 13:07:01.853310 systemd[1]: session-6.scope: Deactivated successfully.
Mar 2 13:07:01.857950 systemd-logind[1433]: Session 6 logged out. Waiting for processes to exit.
Mar 2 13:07:01.871608 systemd[1]: Started sshd@6-10.0.0.56:22-10.0.0.1:43796.service - OpenSSH per-connection server daemon (10.0.0.1:43796).
Mar 2 13:07:01.880739 systemd-logind[1433]: Removed session 6.
Mar 2 13:07:02.004433 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 43796 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:07:02.015249 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:07:02.069455 systemd-logind[1433]: New session 7 of user core.
Mar 2 13:07:02.089408 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 2 13:07:02.217153 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 2 13:07:02.217974 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:07:02.292719 sudo[1604]: pam_unix(sudo:session): session closed for user root
Mar 2 13:07:02.308996 sshd[1601]: pam_unix(sshd:session): session closed for user core
Mar 2 13:07:02.344636 systemd[1]: sshd@6-10.0.0.56:22-10.0.0.1:43796.service: Deactivated successfully.
Mar 2 13:07:02.352247 systemd[1]: session-7.scope: Deactivated successfully.
Mar 2 13:07:02.357051 systemd-logind[1433]: Session 7 logged out. Waiting for processes to exit.
Mar 2 13:07:02.376436 systemd[1]: Started sshd@7-10.0.0.56:22-10.0.0.1:43810.service - OpenSSH per-connection server daemon (10.0.0.1:43810).
Mar 2 13:07:02.379303 systemd-logind[1433]: Removed session 7.
Mar 2 13:07:03.882283 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 43810 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:07:03.907462 sshd[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:07:04.043032 systemd-logind[1433]: New session 8 of user core.
Mar 2 13:07:04.075786 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 2 13:07:04.304130 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 2 13:07:04.306680 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:07:04.352777 sudo[1614]: pam_unix(sudo:session): session closed for user root
Mar 2 13:07:04.385526 sudo[1613]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 2 13:07:04.386333 sudo[1613]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:07:04.562219 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 2 13:07:04.659476 auditctl[1617]: No rules
Mar 2 13:07:04.661068 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 2 13:07:04.661929 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 2 13:07:04.705675 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 2 13:07:05.079938 augenrules[1635]: No rules
Mar 2 13:07:05.084220 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 2 13:07:05.088230 sudo[1613]: pam_unix(sudo:session): session closed for user root
Mar 2 13:07:05.092593 sshd[1609]: pam_unix(sshd:session): session closed for user core
Mar 2 13:07:05.111254 systemd[1]: sshd@7-10.0.0.56:22-10.0.0.1:43810.service: Deactivated successfully.
Mar 2 13:07:05.118044 systemd[1]: session-8.scope: Deactivated successfully.
Mar 2 13:07:05.119595 systemd-logind[1433]: Session 8 logged out. Waiting for processes to exit.
Mar 2 13:07:05.171238 systemd[1]: Started sshd@8-10.0.0.56:22-10.0.0.1:43824.service - OpenSSH per-connection server daemon (10.0.0.1:43824).
Mar 2 13:07:05.176472 systemd-logind[1433]: Removed session 8.
Mar 2 13:07:05.417261 sshd[1643]: Accepted publickey for core from 10.0.0.1 port 43824 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:07:05.418601 sshd[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:07:05.489492 systemd-logind[1433]: New session 9 of user core.
Mar 2 13:07:05.514093 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 2 13:07:05.615198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 2 13:07:05.692221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:05.794512 sudo[1647]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 2 13:07:05.796550 sudo[1647]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 2 13:07:06.454277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:06.460591 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:07:06.612005 kubelet[1665]: E0302 13:07:06.611282 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:07:06.643013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:07:06.643418 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:07:06.871583 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 2 13:07:06.875324 (dockerd)[1681]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 2 13:07:08.112932 dockerd[1681]: time="2026-03-02T13:07:08.112537971Z" level=info msg="Starting up"
Mar 2 13:07:08.708566 dockerd[1681]: time="2026-03-02T13:07:08.706453954Z" level=info msg="Loading containers: start."
Mar 2 13:07:09.471638 kernel: Initializing XFRM netlink socket
Mar 2 13:07:10.266756 systemd-networkd[1374]: docker0: Link UP
Mar 2 13:07:10.380391 dockerd[1681]: time="2026-03-02T13:07:10.378410143Z" level=info msg="Loading containers: done."
Mar 2 13:07:10.473345 dockerd[1681]: time="2026-03-02T13:07:10.473176881Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 2 13:07:10.473727 dockerd[1681]: time="2026-03-02T13:07:10.473652999Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 2 13:07:10.475385 dockerd[1681]: time="2026-03-02T13:07:10.474062884Z" level=info msg="Daemon has completed initialization"
Mar 2 13:07:10.715080 dockerd[1681]: time="2026-03-02T13:07:10.713301928Z" level=info msg="API listen on /run/docker.sock"
Mar 2 13:07:10.715726 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 2 13:07:12.613049 containerd[1451]: time="2026-03-02T13:07:12.612539243Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\""
Mar 2 13:07:13.786622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2826569146.mount: Deactivated successfully.
Mar 2 13:07:16.782407 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 2 13:07:16.799956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:17.264193 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:17.357603 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:07:18.044893 kubelet[1895]: E0302 13:07:18.044606 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:07:18.050768 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:07:18.051372 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:07:18.052584 systemd[1]: kubelet.service: Consumed 1.229s CPU time.
Mar 2 13:07:18.675555 containerd[1451]: time="2026-03-02T13:07:18.674709746Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:18.688134 containerd[1451]: time="2026-03-02T13:07:18.687311440Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.5: active requests=0, bytes read=27074497"
Mar 2 13:07:18.693567 containerd[1451]: time="2026-03-02T13:07:18.693431486Z" level=info msg="ImageCreate event name:\"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:18.706536 containerd[1451]: time="2026-03-02T13:07:18.705956134Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:18.710190 containerd[1451]: time="2026-03-02T13:07:18.710100935Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.5\" with image id \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c548633fcd3b4aad59b70815be4c8be54a0fddaddc3fcffa9371eedb0e96417a\", size \"27071096\" in 6.097445015s"
Mar 2 13:07:18.710190 containerd[1451]: time="2026-03-02T13:07:18.710157721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.5\" returns image reference \"sha256:364ea2876e41b29691964751b6217cd2e343433690fbe16a5c6a236042684df3\""
Mar 2 13:07:18.712667 containerd[1451]: time="2026-03-02T13:07:18.712584135Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\""
Mar 2 13:07:22.607214 containerd[1451]: time="2026-03-02T13:07:22.606754620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:22.611300 containerd[1451]: time="2026-03-02T13:07:22.611201431Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.5: active requests=0, bytes read=21165823"
Mar 2 13:07:22.614164 containerd[1451]: time="2026-03-02T13:07:22.614047566Z" level=info msg="ImageCreate event name:\"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:22.623005 containerd[1451]: time="2026-03-02T13:07:22.622513391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:22.625674 containerd[1451]: time="2026-03-02T13:07:22.624058175Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.5\" with image id \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f0426100c873816560c520d542fa28999a98dad909edd04365f3b0eead790da3\", size \"22822771\" in 3.911392783s"
Mar 2 13:07:22.625674 containerd[1451]: time="2026-03-02T13:07:22.624133458Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.5\" returns image reference \"sha256:8926c34822743bb97f9003f92c30127bfeaad8bed71cd36f1c861ed8fda2c154\""
Mar 2 13:07:22.625674 containerd[1451]: time="2026-03-02T13:07:22.625110108Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\""
Mar 2 13:07:25.247665 containerd[1451]: time="2026-03-02T13:07:25.245205069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:25.248640 containerd[1451]: time="2026-03-02T13:07:25.248395352Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.5: active requests=0, bytes read=15729824"
Mar 2 13:07:25.250733 containerd[1451]: time="2026-03-02T13:07:25.250655164Z" level=info msg="ImageCreate event name:\"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:25.258936 containerd[1451]: time="2026-03-02T13:07:25.258593222Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:25.260091 containerd[1451]: time="2026-03-02T13:07:25.259962722Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.5\" with image id \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b67b0d627c8e99ffa362bd4d9a60ca9a6c449e363a5f88d2aa8c224bd84ca51d\", size \"17386790\" in 2.634818893s"
Mar 2 13:07:25.260091 containerd[1451]: time="2026-03-02T13:07:25.260015244Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.5\" returns image reference \"sha256:f6b3520b1732b4980b2528fe5622e62be26bb6a8d38da81349cb6ccd3a1e6d65\""
Mar 2 13:07:25.260975 containerd[1451]: time="2026-03-02T13:07:25.260931322Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\""
Mar 2 13:07:26.583538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4100000917.mount: Deactivated successfully.
Mar 2 13:07:27.180475 containerd[1451]: time="2026-03-02T13:07:27.180347109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:27.183321 containerd[1451]: time="2026-03-02T13:07:27.183112257Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.5: active requests=0, bytes read=25861770"
Mar 2 13:07:27.185149 containerd[1451]: time="2026-03-02T13:07:27.185096085Z" level=info msg="ImageCreate event name:\"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:27.192923 containerd[1451]: time="2026-03-02T13:07:27.192581008Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:27.194030 containerd[1451]: time="2026-03-02T13:07:27.193978532Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.5\" with image id \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a22a3bf452d07af3b5a3064b089d2ad6579d5dd3b850386e05cc0f36dc3f4cf\", size \"25860789\" in 1.933006987s"
Mar 2 13:07:27.194108 containerd[1451]: time="2026-03-02T13:07:27.194037406Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.5\" returns image reference \"sha256:38728cde323c302ed9eca4f1b7c0080d17db50144e39398fcf901d9df13f0c3e\""
Mar 2 13:07:27.195068 containerd[1451]: time="2026-03-02T13:07:27.195004373Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Mar 2 13:07:28.273498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 2 13:07:28.291393 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:28.615938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790244966.mount: Deactivated successfully.
Mar 2 13:07:28.912310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:28.942419 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:07:29.116699 kubelet[1932]: E0302 13:07:29.116477 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:07:29.132536 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:07:29.133240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:07:32.931789 update_engine[1435]: I20260302 13:07:32.931020 1435 update_attempter.cc:509] Updating boot flags...
Mar 2 13:07:33.416934 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1996)
Mar 2 13:07:34.791243 containerd[1451]: time="2026-03-02T13:07:34.787742404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:34.791243 containerd[1451]: time="2026-03-02T13:07:34.790682863Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388007"
Mar 2 13:07:34.798375 containerd[1451]: time="2026-03-02T13:07:34.792551507Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:34.816449 containerd[1451]: time="2026-03-02T13:07:34.813489196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:34.816449 containerd[1451]: time="2026-03-02T13:07:34.814988176Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 7.619912236s"
Mar 2 13:07:34.816449 containerd[1451]: time="2026-03-02T13:07:34.815037139Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Mar 2 13:07:34.821327 containerd[1451]: time="2026-03-02T13:07:34.817543558Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Mar 2 13:07:35.750617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2213728363.mount: Deactivated successfully.
Mar 2 13:07:35.768965 containerd[1451]: time="2026-03-02T13:07:35.768336836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:35.770537 containerd[1451]: time="2026-03-02T13:07:35.770298901Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321218"
Mar 2 13:07:35.777598 containerd[1451]: time="2026-03-02T13:07:35.776742281Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:35.784637 containerd[1451]: time="2026-03-02T13:07:35.784479643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:35.788159 containerd[1451]: time="2026-03-02T13:07:35.787296649Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 969.719735ms"
Mar 2 13:07:35.788159 containerd[1451]: time="2026-03-02T13:07:35.787382305Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Mar 2 13:07:35.793431 containerd[1451]: time="2026-03-02T13:07:35.793287355Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Mar 2 13:07:36.678522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount467831706.mount: Deactivated successfully.
Mar 2 13:07:39.272182 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 2 13:07:39.291120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:39.624436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:39.650081 (kubelet)[2064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 2 13:07:39.757790 kubelet[2064]: E0302 13:07:39.757651 2064 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 2 13:07:39.762671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 2 13:07:39.763110 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 2 13:07:40.494206 containerd[1451]: time="2026-03-02T13:07:40.493239055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:40.497623 containerd[1451]: time="2026-03-02T13:07:40.497205154Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22860674"
Mar 2 13:07:40.501991 containerd[1451]: time="2026-03-02T13:07:40.500793981Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:40.511479 containerd[1451]: time="2026-03-02T13:07:40.510937686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:07:40.514370 containerd[1451]: time="2026-03-02T13:07:40.513990248Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 4.72062469s"
Mar 2 13:07:40.514370 containerd[1451]: time="2026-03-02T13:07:40.514081860Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Mar 2 13:07:47.719539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:47.757196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:47.822378 systemd[1]: Reloading requested from client PID 2118 ('systemctl') (unit session-9.scope)...
Mar 2 13:07:47.823285 systemd[1]: Reloading...
Mar 2 13:07:48.355053 zram_generator::config[2157]: No configuration found.
Mar 2 13:07:49.045498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:07:49.196310 systemd[1]: Reloading finished in 1369 ms.
Mar 2 13:07:49.322642 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:49.339981 systemd[1]: kubelet.service: Deactivated successfully.
Mar 2 13:07:49.340478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:49.373736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:07:49.905977 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:07:49.936429 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:07:50.459480 kubelet[2206]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 2 13:07:50.459480 kubelet[2206]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:07:50.459480 kubelet[2206]: I0302 13:07:50.458970 2206 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 2 13:07:51.282661 kubelet[2206]: I0302 13:07:51.281966 2206 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 2 13:07:51.282661 kubelet[2206]: I0302 13:07:51.282153 2206 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 13:07:51.282661 kubelet[2206]: I0302 13:07:51.282277 2206 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 2 13:07:51.282661 kubelet[2206]: I0302 13:07:51.282323 2206 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:07:51.291970 kubelet[2206]: I0302 13:07:51.286438 2206 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 2 13:07:51.343456 kubelet[2206]: E0302 13:07:51.342710 2206 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:07:51.345231 kubelet[2206]: I0302 13:07:51.344884 2206 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:07:51.402972 kubelet[2206]: E0302 13:07:51.399933 2206 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 2 13:07:51.408272 kubelet[2206]: I0302 13:07:51.400747 2206 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 2 13:07:51.470462 kubelet[2206]: I0302 13:07:51.470047 2206 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 2 13:07:51.472461 kubelet[2206]: I0302 13:07:51.472203 2206 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:07:51.472514 kubelet[2206]: I0302 13:07:51.472259 2206 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 13:07:51.472514 kubelet[2206]: I0302 13:07:51.472510 2206 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 13:07:51.475343 kubelet[2206]: I0302 13:07:51.472526 2206 container_manager_linux.go:306] "Creating device plugin manager"
Mar 2 13:07:51.475343 kubelet[2206]: I0302 13:07:51.473013 2206 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 2 13:07:51.477513 kubelet[2206]: I0302 13:07:51.477157 2206 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:07:51.478196 kubelet[2206]: I0302 13:07:51.478109 2206 kubelet.go:475] "Attempting to sync node with API server"
Mar 2 13:07:51.478196 kubelet[2206]: I0302 13:07:51.478198 2206 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:07:51.478196 kubelet[2206]: I0302 13:07:51.478244 2206 kubelet.go:387] "Adding apiserver pod source"
Mar 2 13:07:51.478719 kubelet[2206]: I0302 13:07:51.478417 2206 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:07:51.483664 kubelet[2206]: E0302 13:07:51.483324 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 13:07:51.486924 kubelet[2206]: I0302 13:07:51.484416 2206 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 2 13:07:51.486924 kubelet[2206]: I0302 13:07:51.485385 2206 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:07:51.486924 kubelet[2206]: I0302 13:07:51.485489 2206 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 2 13:07:51.486924 kubelet[2206]: E0302 13:07:51.485479 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:07:51.486924 kubelet[2206]: W0302 13:07:51.485657 2206 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 2 13:07:51.501741 kubelet[2206]: I0302 13:07:51.501642 2206 server.go:1262] "Started kubelet"
Mar 2 13:07:51.502210 kubelet[2206]: I0302 13:07:51.502109 2206 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:07:51.531702 kubelet[2206]: E0302 13:07:51.511722 2206 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.56:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.56:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.189908223d8c5009 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:07:51.501516809 +0000 UTC m=+1.421128744,LastTimestamp:2026-03-02 13:07:51.501516809 +0000 UTC m=+1.421128744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 2 13:07:51.532158 kubelet[2206]: I0302 13:07:51.531784 2206 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:07:51.532504 kubelet[2206]: I0302 13:07:51.532395 2206 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 2 13:07:51.533365 kubelet[2206]: I0302 13:07:51.533222 2206 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:07:51.536779 kubelet[2206]: I0302 13:07:51.536747 2206 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 13:07:51.540342 kubelet[2206]: I0302 13:07:51.539924 2206 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:07:51.541034 kubelet[2206]: I0302 13:07:51.539952 2206 server.go:310] "Adding debug handlers to kubelet server"
Mar 2 13:07:51.543167 kubelet[2206]: I0302 13:07:51.543141 2206 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 2 13:07:51.543521 kubelet[2206]: E0302 13:07:51.543492 2206 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:07:51.548788 kubelet[2206]: I0302 13:07:51.548405 2206 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 2 13:07:51.548788 kubelet[2206]: I0302 13:07:51.548513 2206 reconciler.go:29] "Reconciler: start to sync state"
Mar 2 13:07:51.549035 kubelet[2206]: I0302 13:07:51.548983 2206 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:07:51.549204 kubelet[2206]: I0302 13:07:51.549101 2206 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:07:51.553173 kubelet[2206]: E0302 13:07:51.552539 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="200ms"
Mar 2 13:07:51.553173 kubelet[2206]: E0302 13:07:51.552634 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:07:51.556438 kubelet[2206]: E0302 13:07:51.556403 2206 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 13:07:51.558964 kubelet[2206]: I0302 13:07:51.558707 2206 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:07:51.678710 kubelet[2206]: E0302 13:07:51.650902 2206 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:07:51.831176 kubelet[2206]: E0302 13:07:51.828119 2206 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:07:51.834518 kubelet[2206]: E0302 13:07:51.834258 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="400ms"
Mar 2 13:07:51.871882 kubelet[2206]: I0302 13:07:51.871720 2206 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 13:07:51.871882 kubelet[2206]: I0302 13:07:51.871793 2206 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 13:07:51.871882 kubelet[2206]: I0302 13:07:51.871887 2206 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:07:51.887262 kubelet[2206]: I0302 13:07:51.886321 2206 policy_none.go:49] "None policy: Start"
Mar 2 13:07:51.887262 kubelet[2206]: I0302 13:07:51.886403 2206 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 2 13:07:51.887262 kubelet[2206]: I0302 13:07:51.886429 2206 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 2 13:07:51.894781 kubelet[2206]: I0302 13:07:51.894657 2206 policy_none.go:47] "Start"
Mar 2 13:07:51.915698 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 2 13:07:51.934255 kubelet[2206]: E0302 13:07:51.933448 2206 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:07:51.943988 kubelet[2206]: I0302 13:07:51.941965 2206 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:07:51.948617 kubelet[2206]: I0302 13:07:51.948520 2206 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:07:51.949158 kubelet[2206]: I0302 13:07:51.949142 2206 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 2 13:07:51.949367 kubelet[2206]: I0302 13:07:51.949351 2206 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 2 13:07:51.952994 kubelet[2206]: E0302 13:07:51.952956 2206 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 13:07:51.958365 kubelet[2206]: E0302 13:07:51.958325 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 13:07:51.968945 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 2 13:07:51.986399 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 2 13:07:52.009961 kubelet[2206]: E0302 13:07:52.009378 2206 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 13:07:52.009961 kubelet[2206]: I0302 13:07:52.009913 2206 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 13:07:52.010205 kubelet[2206]: I0302 13:07:52.009970 2206 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 13:07:52.011304 kubelet[2206]: I0302 13:07:52.010503 2206 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 13:07:52.015273 kubelet[2206]: E0302 13:07:52.014905 2206 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 13:07:52.015273 kubelet[2206]: E0302 13:07:52.014989 2206 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 13:07:52.192793 kubelet[2206]: I0302 13:07:52.174250 2206 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:07:52.263796 kubelet[2206]: I0302 13:07:52.262759 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:07:52.263796 kubelet[2206]: I0302 13:07:52.263516 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b949c15d364f968f7ca7c3d0e70a550-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b949c15d364f968f7ca7c3d0e70a550\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:07:52.263796 kubelet[2206]: I0302 13:07:52.263624 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b949c15d364f968f7ca7c3d0e70a550-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b949c15d364f968f7ca7c3d0e70a550\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:07:52.263796 kubelet[2206]: I0302 13:07:52.263686 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b949c15d364f968f7ca7c3d0e70a550-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b949c15d364f968f7ca7c3d0e70a550\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:07:52.263796 kubelet[2206]: I0302 13:07:52.263714 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:07:52.267618 kubelet[2206]: I0302 13:07:52.263799 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:07:52.267618 kubelet[2206]: I0302 13:07:52.263895 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:07:52.267618 kubelet[2206]: I0302 13:07:52.263922 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:07:52.267618 kubelet[2206]: E0302 13:07:52.263279 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="800ms"
Mar 2 13:07:52.267618 kubelet[2206]: E0302 13:07:52.264202 2206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost"
Mar 2 13:07:52.286691 systemd[1]: Created slice kubepods-burstable-pod3b949c15d364f968f7ca7c3d0e70a550.slice - libcontainer container kubepods-burstable-pod3b949c15d364f968f7ca7c3d0e70a550.slice.
Mar 2 13:07:52.338737 kubelet[2206]: E0302 13:07:52.335265 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:52.343997 systemd[1]: Created slice kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice - libcontainer container kubepods-burstable-poddb0989cdb653dfec284dd4f35625e9e7.slice.
Mar 2 13:07:52.357920 kubelet[2206]: E0302 13:07:52.356432 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:52.365051 kubelet[2206]: I0302 13:07:52.364643 2206 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 2 13:07:52.383278 systemd[1]: Created slice kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice - libcontainer container kubepods-burstable-pod89efda49e166906783d8d868d41ebb86.slice.
Mar 2 13:07:52.388307 kubelet[2206]: E0302 13:07:52.388150 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:52.498080 kubelet[2206]: I0302 13:07:52.495443 2206 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:07:52.516513 kubelet[2206]: E0302 13:07:52.512497 2206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost"
Mar 2 13:07:52.550407 kubelet[2206]: E0302 13:07:52.550281 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:07:52.646657 kubelet[2206]: E0302 13:07:52.645156 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:52.649798 containerd[1451]: time="2026-03-02T13:07:52.648262886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b949c15d364f968f7ca7c3d0e70a550,Namespace:kube-system,Attempt:0,}"
Mar 2 13:07:52.670367 kubelet[2206]: E0302 13:07:52.670144 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:52.671362 containerd[1451]: time="2026-03-02T13:07:52.670953301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,}"
Mar 2 13:07:52.697068 kubelet[2206]: E0302 13:07:52.692991 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:52.697240 containerd[1451]: time="2026-03-02T13:07:52.693733220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,}"
Mar 2 13:07:52.784737 kubelet[2206]: E0302 13:07:52.784503 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 13:07:52.865262 kubelet[2206]: E0302 13:07:52.864987 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:07:52.928117 kubelet[2206]: I0302 13:07:52.927918 2206 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:07:52.928608 kubelet[2206]: E0302 13:07:52.928447 2206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost"
Mar 2 13:07:53.066941 kubelet[2206]: E0302 13:07:53.066153 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="1.6s"
Mar 2 13:07:53.355026 kubelet[2206]: E0302 13:07:53.354230 2206 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:07:53.503075 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374771145.mount: Deactivated successfully.
Mar 2 13:07:53.506393 kubelet[2206]: E0302 13:07:53.505359 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 13:07:53.530610 containerd[1451]: time="2026-03-02T13:07:53.530401665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:07:53.538902 containerd[1451]: time="2026-03-02T13:07:53.538690751Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 2 13:07:53.543166 containerd[1451]: time="2026-03-02T13:07:53.542989203Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:07:53.552895 containerd[1451]: time="2026-03-02T13:07:53.552580001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 2 13:07:53.552895 containerd[1451]: time="2026-03-02T13:07:53.552700052Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:07:53.555590 containerd[1451]: time="2026-03-02T13:07:53.555288234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 2 13:07:53.556171 containerd[1451]: time="2026-03-02T13:07:53.556078934Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:07:53.574456 containerd[1451]: time="2026-03-02T13:07:53.573903947Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 925.395085ms"
Mar 2 13:07:53.574456 containerd[1451]: time="2026-03-02T13:07:53.573930443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 2 13:07:53.575229 containerd[1451]: time="2026-03-02T13:07:53.575050349Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 881.092411ms"
Mar 2 13:07:53.577784 containerd[1451]: time="2026-03-02T13:07:53.577191069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 906.139941ms"
Mar 2 13:07:53.734341 kubelet[2206]: I0302 13:07:53.731355 2206 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:07:53.738310 kubelet[2206]: E0302 13:07:53.736325 2206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost"
Mar 2 13:07:54.051324 containerd[1451]: time="2026-03-02T13:07:54.032123661Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:07:54.051324 containerd[1451]: time="2026-03-02T13:07:54.034186769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:07:54.051324 containerd[1451]: time="2026-03-02T13:07:54.034219437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:54.051324 containerd[1451]: time="2026-03-02T13:07:54.034352821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:54.065188 containerd[1451]: time="2026-03-02T13:07:54.063678962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:07:54.065188 containerd[1451]: time="2026-03-02T13:07:54.063744552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:07:54.065188 containerd[1451]: time="2026-03-02T13:07:54.063794132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:54.065188 containerd[1451]: time="2026-03-02T13:07:54.064081686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:54.086629 containerd[1451]: time="2026-03-02T13:07:54.083568259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:07:54.086629 containerd[1451]: time="2026-03-02T13:07:54.083666869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:07:54.086629 containerd[1451]: time="2026-03-02T13:07:54.083690060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:54.086629 containerd[1451]: time="2026-03-02T13:07:54.083942781Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:07:54.143283 systemd[1]: Started cri-containerd-4844d673875f3d2cb134c6071c115045b511549d7a71f5752fa2119aa44f115a.scope - libcontainer container 4844d673875f3d2cb134c6071c115045b511549d7a71f5752fa2119aa44f115a.
Mar 2 13:07:54.184543 systemd[1]: Started cri-containerd-83afdc9027e79f6da58212a15cb63f9907b60ad82c7e176c09d69f1b111f9ed1.scope - libcontainer container 83afdc9027e79f6da58212a15cb63f9907b60ad82c7e176c09d69f1b111f9ed1.
Mar 2 13:07:54.340081 systemd[1]: Started cri-containerd-7c4e536bb8be5e2970158e3b64e43f2571d1cdb3e7a5bae179f90498722c6560.scope - libcontainer container 7c4e536bb8be5e2970158e3b64e43f2571d1cdb3e7a5bae179f90498722c6560.
Mar 2 13:07:54.569548 kubelet[2206]: E0302 13:07:54.566588 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.56:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 2 13:07:54.656236 containerd[1451]: time="2026-03-02T13:07:54.656009548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:89efda49e166906783d8d868d41ebb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"83afdc9027e79f6da58212a15cb63f9907b60ad82c7e176c09d69f1b111f9ed1\""
Mar 2 13:07:54.661013 kubelet[2206]: E0302 13:07:54.660382 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:54.668664 kubelet[2206]: E0302 13:07:54.668007 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.56:6443: connect: connection refused" interval="3.2s"
Mar 2 13:07:54.671655 containerd[1451]: time="2026-03-02T13:07:54.668934917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:db0989cdb653dfec284dd4f35625e9e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"4844d673875f3d2cb134c6071c115045b511549d7a71f5752fa2119aa44f115a\""
Mar 2 13:07:54.671748 kubelet[2206]: E0302 13:07:54.670955 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:54.684565 containerd[1451]: time="2026-03-02T13:07:54.684330566Z" level=info msg="CreateContainer within sandbox \"83afdc9027e79f6da58212a15cb63f9907b60ad82c7e176c09d69f1b111f9ed1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 2 13:07:54.690907 containerd[1451]: time="2026-03-02T13:07:54.690577031Z" level=info msg="CreateContainer within sandbox \"4844d673875f3d2cb134c6071c115045b511549d7a71f5752fa2119aa44f115a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 2 13:07:54.697672 containerd[1451]: time="2026-03-02T13:07:54.697570431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b949c15d364f968f7ca7c3d0e70a550,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c4e536bb8be5e2970158e3b64e43f2571d1cdb3e7a5bae179f90498722c6560\""
Mar 2 13:07:54.699422 kubelet[2206]: E0302 13:07:54.699031 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:54.708661 containerd[1451]: time="2026-03-02T13:07:54.708585984Z" level=info msg="CreateContainer within sandbox \"7c4e536bb8be5e2970158e3b64e43f2571d1cdb3e7a5bae179f90498722c6560\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 2 13:07:54.751702 containerd[1451]: time="2026-03-02T13:07:54.751572487Z" level=info msg="CreateContainer within sandbox \"83afdc9027e79f6da58212a15cb63f9907b60ad82c7e176c09d69f1b111f9ed1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0bf5e055630cb2aa20aec15227cf05854166c3016962143aefc49132b31439a6\""
Mar 2 13:07:54.752617 containerd[1451]: time="2026-03-02T13:07:54.752586413Z" level=info msg="StartContainer for \"0bf5e055630cb2aa20aec15227cf05854166c3016962143aefc49132b31439a6\""
Mar 2 13:07:54.761942 containerd[1451]: time="2026-03-02T13:07:54.761790396Z" level=info msg="CreateContainer within sandbox \"4844d673875f3d2cb134c6071c115045b511549d7a71f5752fa2119aa44f115a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"baa1e5df67b26e5bbbe18debcc24354520cf7791f4b6a274fa80d018d6dbc677\""
Mar 2 13:07:54.764433 containerd[1451]: time="2026-03-02T13:07:54.763030929Z" level=info msg="StartContainer for \"baa1e5df67b26e5bbbe18debcc24354520cf7791f4b6a274fa80d018d6dbc677\""
Mar 2 13:07:54.769166 containerd[1451]: time="2026-03-02T13:07:54.769077534Z" level=info msg="CreateContainer within sandbox \"7c4e536bb8be5e2970158e3b64e43f2571d1cdb3e7a5bae179f90498722c6560\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c94d1efb6f51b282cd8dd95def9e15143d7825c28f443e4c96eaf12fda4abbe8\""
Mar 2 13:07:54.770116 containerd[1451]: time="2026-03-02T13:07:54.770039495Z" level=info msg="StartContainer for \"c94d1efb6f51b282cd8dd95def9e15143d7825c28f443e4c96eaf12fda4abbe8\""
Mar 2 13:07:54.985689 systemd[1]: Started cri-containerd-0bf5e055630cb2aa20aec15227cf05854166c3016962143aefc49132b31439a6.scope - libcontainer container 0bf5e055630cb2aa20aec15227cf05854166c3016962143aefc49132b31439a6.
Mar 2 13:07:55.007779 systemd[1]: Started cri-containerd-baa1e5df67b26e5bbbe18debcc24354520cf7791f4b6a274fa80d018d6dbc677.scope - libcontainer container baa1e5df67b26e5bbbe18debcc24354520cf7791f4b6a274fa80d018d6dbc677.
Mar 2 13:07:55.044531 systemd[1]: Started cri-containerd-c94d1efb6f51b282cd8dd95def9e15143d7825c28f443e4c96eaf12fda4abbe8.scope - libcontainer container c94d1efb6f51b282cd8dd95def9e15143d7825c28f443e4c96eaf12fda4abbe8.
Mar 2 13:07:55.354482 kubelet[2206]: I0302 13:07:55.354187 2206 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:07:55.358962 kubelet[2206]: E0302 13:07:55.358715 2206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": dial tcp 10.0.0.56:6443: connect: connection refused" node="localhost"
Mar 2 13:07:55.371077 containerd[1451]: time="2026-03-02T13:07:55.368328482Z" level=info msg="StartContainer for \"0bf5e055630cb2aa20aec15227cf05854166c3016962143aefc49132b31439a6\" returns successfully"
Mar 2 13:07:55.381008 kubelet[2206]: E0302 13:07:55.379518 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.56:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 2 13:07:55.410575 containerd[1451]: time="2026-03-02T13:07:55.409611085Z" level=info msg="StartContainer for \"baa1e5df67b26e5bbbe18debcc24354520cf7791f4b6a274fa80d018d6dbc677\" returns successfully"
Mar 2 13:07:55.414727 containerd[1451]: time="2026-03-02T13:07:55.412363413Z" level=info msg="StartContainer for \"c94d1efb6f51b282cd8dd95def9e15143d7825c28f443e4c96eaf12fda4abbe8\" returns successfully"
Mar 2 13:07:55.609395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1211367326.mount: Deactivated successfully.
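The kubelet lines above use klog's header layout: a severity letter, `mmdd`, wall-clock time, PID, and `file:line]`, followed by the structured message. A rough sketch of splitting that header off, assuming the standard `Lmmdd hh:mm:ss.uuuuuu PID file:line] msg` format (the helper is hypothetical, not kubelet code):

```python
import re

# klog header: Lmmdd hh:mm:ss.uuuuuu PID file:line] message
# e.g. "E0302 13:07:55.358715 2206 kubelet_node_status.go:107] ..."
KLOG = re.compile(
    r'^(?P<sev>[IWEF])(?P<month>\d{2})(?P<day>\d{2})\s+'
    r'(?P<time>\d{2}:\d{2}:\d{2}\.\d{6})\s+'
    r'(?P<pid>\d+)\s+'
    r'(?P<src>[\w./-]+:\d+)\]\s?(?P<msg>.*)$'
)

def parse_klog(line: str):
    """Return the klog header fields as a dict, or None if it doesn't match."""
    m = KLOG.match(line)
    return m.groupdict() if m else None

rec = parse_klog('E0302 13:07:55.358715 2206 kubelet_node_status.go:107] '
                 '"Unable to register node with API server" node="localhost"')
```

Severity `E` here is an error but not fatal; the register-node attempts simply repeat until the API server becomes reachable.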
Mar 2 13:07:55.644941 kubelet[2206]: E0302 13:07:55.644783 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.56:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.56:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 2 13:07:56.166780 kubelet[2206]: E0302 13:07:56.165967 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:56.166780 kubelet[2206]: E0302 13:07:56.166321 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:56.226679 kubelet[2206]: E0302 13:07:56.226379 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:56.229903 kubelet[2206]: E0302 13:07:56.228205 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:56.233136 kubelet[2206]: E0302 13:07:56.233108 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:56.234038 kubelet[2206]: E0302 13:07:56.234009 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:57.294426 kubelet[2206]: E0302 13:07:57.293620 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:57.294426 kubelet[2206]: E0302 13:07:57.294026 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:57.302348 kubelet[2206]: E0302 13:07:57.300045 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:57.302348 kubelet[2206]: E0302 13:07:57.300232 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:57.302348 kubelet[2206]: E0302 13:07:57.301983 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:57.302348 kubelet[2206]: E0302 13:07:57.302119 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:58.293069 kubelet[2206]: E0302 13:07:58.291947 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:58.293069 kubelet[2206]: E0302 13:07:58.292228 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:58.297690 kubelet[2206]: E0302 13:07:58.297610 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:58.298513 kubelet[2206]: E0302 13:07:58.297812 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
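The recurring "Nameserver limits exceeded" error comes from the kubelet capping a pod's resolv.conf at three nameservers (the same number glibc's resolver reads): the host apparently lists more than three, so the extras are dropped and the applied line becomes `1.1.1.1 1.0.0.1 8.8.8.8`. A toy sketch of that truncation rule (the limit of 3 matches the log; the function name and the fourth nameserver are made up for illustration):

```python
MAX_NAMESERVERS = 3  # kubelet's per-pod resolv.conf cap; glibc also only uses 3

def apply_nameserver_limit(nameservers):
    """Keep the first MAX_NAMESERVERS entries; return (applied, omitted)."""
    return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

# Hypothetical host resolv.conf with four upstreams; the log only shows
# the three that survived truncation.
applied, omitted = apply_nameserver_limit(["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"])
```

The message is logged as an error each time pod DNS config is built, which is why it repeats alongside every sandbox/pod event above.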
Mar 2 13:07:58.301888 kubelet[2206]: E0302 13:07:58.299494 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:07:58.301888 kubelet[2206]: E0302 13:07:58.299976 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:07:58.590757 kubelet[2206]: I0302 13:07:58.577597 2206 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:08:00.267272 kubelet[2206]: E0302 13:08:00.266910 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:08:00.267272 kubelet[2206]: E0302 13:08:00.267403 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:02.030125 kubelet[2206]: E0302 13:08:02.026248 2206 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 13:08:06.533206 kubelet[2206]: E0302 13:08:06.532974 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:08:06.533206 kubelet[2206]: E0302 13:08:06.533395 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:06.662353 kubelet[2206]: E0302 13:08:06.661694 2206 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.56:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 2 13:08:06.862553 kubelet[2206]: E0302 13:08:06.860056 2206 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.56:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{localhost.189908223d8c5009 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:07:51.501516809 +0000 UTC m=+1.421128744,LastTimestamp:2026-03-02 13:07:51.501516809 +0000 UTC m=+1.421128744,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 2 13:08:07.411335 kubelet[2206]: E0302 13:08:07.411201 2206 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.56:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 2 13:08:07.939967 kubelet[2206]: E0302 13:08:07.915402 2206 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.56:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: TLS handshake timeout (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Mar 2 13:08:08.592764 kubelet[2206]: E0302 13:08:08.591736 2206 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.56:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="localhost"
Mar 2 13:08:09.582280 kubelet[2206]: E0302 13:08:09.581984 2206 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Mar 2 13:08:09.641480 kubelet[2206]: I0302 13:08:09.641184 2206 apiserver.go:52] "Watching apiserver"
Mar 2 13:08:09.652386 kubelet[2206]: I0302 13:08:09.652333 2206 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 2 13:08:10.094975 kubelet[2206]: E0302 13:08:10.092917 2206 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Mar 2 13:08:10.267961 kubelet[2206]: E0302 13:08:10.265903 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:08:10.267961 kubelet[2206]: E0302 13:08:10.266556 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:10.722622 kubelet[2206]: E0302 13:08:10.722277 2206 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Mar 2 13:08:12.036279 kubelet[2206]: E0302 13:08:12.035570 2206 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 2 13:08:13.024132 kubelet[2206]: E0302 13:08:13.024026 2206 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found
Mar 2 13:08:14.272977 kubelet[2206]: E0302 13:08:14.271172 2206 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 2 13:08:14.272977 kubelet[2206]: E0302 13:08:14.271376 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:14.325124 kubelet[2206]: E0302 13:08:14.324996 2206 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 2 13:08:14.995631 kubelet[2206]: I0302 13:08:14.995594 2206 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:08:15.014470 kubelet[2206]: I0302 13:08:15.014065 2206 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 2 13:08:15.049559 kubelet[2206]: I0302 13:08:15.049434 2206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 2 13:08:15.081454 kubelet[2206]: E0302 13:08:15.081316 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:15.083050 kubelet[2206]: I0302 13:08:15.082793 2206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:08:15.096007 kubelet[2206]: E0302 13:08:15.095742 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:15.096959 kubelet[2206]: I0302 13:08:15.096657 2206 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 2 13:08:15.106332 kubelet[2206]: E0302 13:08:15.106259 2206 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:15.583975 systemd[1]: Reloading requested from client PID 2492 ('systemctl') (unit session-9.scope)...
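The lease controller's retry interval grows from `3.2s` (13:07:54) to `6.4s` (13:08:07), i.e. it doubles on repeated failure. A generic capped-doubling sketch of that behavior (an illustration of exponential backoff, not the kubelet's actual retry code; the 200ms base and the step count are assumptions chosen so that 3.2s and 6.4s fall on consecutive steps):

```python
def backoff_intervals(base: float = 0.2, factor: float = 2.0, steps: int = 7):
    """Successive retry delays under exponential backoff: base * factor**i."""
    return [base * factor ** i for i in range(steps)]

delays = backoff_intervals()
# With these assumed parameters, 3.2s and 6.4s appear at steps 4 and 5.
```

Real implementations usually cap the delay and add jitter so that many failing clients do not retry in lockstep against a recovering API server.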
Mar 2 13:08:15.584036 systemd[1]: Reloading...
Mar 2 13:08:15.697001 zram_generator::config[2531]: No configuration found.
Mar 2 13:08:15.869680 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 2 13:08:16.008480 systemd[1]: Reloading finished in 423 ms.
Mar 2 13:08:16.079995 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:08:16.099978 systemd[1]: kubelet.service: Deactivated successfully.
Mar 2 13:08:16.100494 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:08:16.100578 systemd[1]: kubelet.service: Consumed 6.969s CPU time, 130.6M memory peak, 0B memory swap peak.
Mar 2 13:08:16.116278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 2 13:08:16.315675 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 2 13:08:16.324496 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 2 13:08:16.411961 kubelet[2576]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 2 13:08:16.411961 kubelet[2576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 2 13:08:16.411961 kubelet[2576]: I0302 13:08:16.411712 2576 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 2 13:08:16.424430 kubelet[2576]: I0302 13:08:16.424104 2576 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Mar 2 13:08:16.424430 kubelet[2576]: I0302 13:08:16.424433 2576 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 2 13:08:16.424613 kubelet[2576]: I0302 13:08:16.424475 2576 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Mar 2 13:08:16.424613 kubelet[2576]: I0302 13:08:16.424490 2576 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 2 13:08:16.425245 kubelet[2576]: I0302 13:08:16.425092 2576 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 2 13:08:16.426613 kubelet[2576]: I0302 13:08:16.426512 2576 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 2 13:08:16.429368 kubelet[2576]: I0302 13:08:16.429337 2576 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 2 13:08:16.435800 kubelet[2576]: E0302 13:08:16.435586 2576 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 2 13:08:16.435800 kubelet[2576]: I0302 13:08:16.435755 2576 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Mar 2 13:08:16.448363 kubelet[2576]: I0302 13:08:16.448142 2576 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Mar 2 13:08:16.448717 kubelet[2576]: I0302 13:08:16.448656 2576 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 2 13:08:16.450938 kubelet[2576]: I0302 13:08:16.448688 2576 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 2 13:08:16.450938 kubelet[2576]: I0302 13:08:16.449022 2576 topology_manager.go:138] "Creating topology manager with none policy"
Mar 2 13:08:16.450938 kubelet[2576]: I0302 13:08:16.449035 2576 container_manager_linux.go:306] "Creating device plugin manager"
Mar 2 13:08:16.450938 kubelet[2576]: I0302 13:08:16.449068 2576 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Mar 2 13:08:16.450938 kubelet[2576]: I0302 13:08:16.449435 2576 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:08:16.451449 kubelet[2576]: I0302 13:08:16.449662 2576 kubelet.go:475] "Attempting to sync node with API server"
Mar 2 13:08:16.451449 kubelet[2576]: I0302 13:08:16.449678 2576 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 2 13:08:16.451449 kubelet[2576]: I0302 13:08:16.449706 2576 kubelet.go:387] "Adding apiserver pod source"
Mar 2 13:08:16.451449 kubelet[2576]: I0302 13:08:16.449724 2576 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 2 13:08:16.456373 kubelet[2576]: I0302 13:08:16.455068 2576 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 2 13:08:16.456373 kubelet[2576]: I0302 13:08:16.455928 2576 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 2 13:08:16.456373 kubelet[2576]: I0302 13:08:16.455971 2576 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Mar 2 13:08:16.465166 kubelet[2576]: I0302 13:08:16.465030 2576 server.go:1262] "Started kubelet"
Mar 2 13:08:16.467101 kubelet[2576]: I0302 13:08:16.467040 2576 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 2 13:08:16.467179 kubelet[2576]: I0302 13:08:16.467123 2576 server_v1.go:49] "podresources" method="list" useActivePods=true
Mar 2 13:08:16.467703 kubelet[2576]: I0302 13:08:16.467516 2576 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 2 13:08:16.467703 kubelet[2576]: I0302 13:08:16.467630 2576 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 2 13:08:16.469535 kubelet[2576]: I0302 13:08:16.469501 2576 server.go:310] "Adding debug handlers to kubelet server"
Mar 2 13:08:16.475039 kubelet[2576]: I0302 13:08:16.473625 2576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 2 13:08:16.475039 kubelet[2576]: I0302 13:08:16.474087 2576 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 2 13:08:16.475039 kubelet[2576]: I0302 13:08:16.474158 2576 volume_manager.go:313] "Starting Kubelet Volume Manager"
Mar 2 13:08:16.475039 kubelet[2576]: I0302 13:08:16.474290 2576 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 2 13:08:16.475039 kubelet[2576]: I0302 13:08:16.474577 2576 reconciler.go:29] "Reconciler: start to sync state"
Mar 2 13:08:16.480584 kubelet[2576]: I0302 13:08:16.480387 2576 factory.go:223] Registration of the systemd container factory successfully
Mar 2 13:08:16.481346 kubelet[2576]: I0302 13:08:16.480566 2576 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 2 13:08:16.488053 kubelet[2576]: E0302 13:08:16.483985 2576 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 2 13:08:16.490970 kubelet[2576]: E0302 13:08:16.488762 2576 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 2 13:08:16.498970 kubelet[2576]: I0302 13:08:16.498407 2576 factory.go:223] Registration of the containerd container factory successfully
Mar 2 13:08:16.501948 sudo[2597]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Mar 2 13:08:16.504282 sudo[2597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Mar 2 13:08:16.530569 kubelet[2576]: I0302 13:08:16.530339 2576 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Mar 2 13:08:16.547411 kubelet[2576]: I0302 13:08:16.547069 2576 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Mar 2 13:08:16.547411 kubelet[2576]: I0302 13:08:16.547100 2576 status_manager.go:244] "Starting to sync pod status with apiserver"
Mar 2 13:08:16.547411 kubelet[2576]: I0302 13:08:16.547125 2576 kubelet.go:2428] "Starting kubelet main sync loop"
Mar 2 13:08:16.547411 kubelet[2576]: E0302 13:08:16.547241 2576 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 2 13:08:16.586984 kubelet[2576]: I0302 13:08:16.586632 2576 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 2 13:08:16.586984 kubelet[2576]: I0302 13:08:16.586694 2576 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 2 13:08:16.586984 kubelet[2576]: I0302 13:08:16.586722 2576 state_mem.go:36] "Initialized new in-memory state store"
Mar 2 13:08:16.587183 kubelet[2576]: I0302 13:08:16.587003 2576 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 2 13:08:16.587183 kubelet[2576]: I0302 13:08:16.587020 2576 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 2 13:08:16.587183 kubelet[2576]: I0302 13:08:16.587052 2576 policy_none.go:49] "None policy: Start"
Mar 2 13:08:16.587183 kubelet[2576]: I0302 13:08:16.587066 2576 memory_manager.go:187] "Starting memorymanager" policy="None"
Mar 2 13:08:16.587183 kubelet[2576]: I0302 13:08:16.587083 2576 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Mar 2 13:08:16.587183 kubelet[2576]: I0302 13:08:16.587295 2576 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Mar 2 13:08:16.587529 kubelet[2576]: I0302 13:08:16.587312 2576 policy_none.go:47] "Start"
Mar 2 13:08:16.611815 kubelet[2576]: E0302 13:08:16.611528 2576 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 2 13:08:16.613337 kubelet[2576]: I0302 13:08:16.613149 2576 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 2 13:08:16.613337 kubelet[2576]: I0302 13:08:16.613175 2576 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 2 13:08:16.614128 kubelet[2576]: I0302 13:08:16.613699 2576 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 2 13:08:16.617032 kubelet[2576]: E0302 13:08:16.616673 2576 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 2 13:08:16.649037 kubelet[2576]: I0302 13:08:16.648332 2576 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 2 13:08:16.649037 kubelet[2576]: I0302 13:08:16.648684 2576 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 2 13:08:16.649037 kubelet[2576]: I0302 13:08:16.649005 2576 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:08:16.674949 kubelet[2576]: E0302 13:08:16.674743 2576 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 2 13:08:16.675394 kubelet[2576]: I0302 13:08:16.675358 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b949c15d364f968f7ca7c3d0e70a550-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b949c15d364f968f7ca7c3d0e70a550\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:08:16.675482 kubelet[2576]: I0302 13:08:16.675405 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:08:16.675482 kubelet[2576]: I0302 13:08:16.675434 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:08:16.675482 kubelet[2576]: I0302 13:08:16.675452 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:08:16.675482 kubelet[2576]: I0302 13:08:16.675470 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/89efda49e166906783d8d868d41ebb86-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"89efda49e166906783d8d868d41ebb86\") " pod="kube-system/kube-scheduler-localhost"
Mar 2 13:08:16.675631 kubelet[2576]: I0302 13:08:16.675487 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b949c15d364f968f7ca7c3d0e70a550-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b949c15d364f968f7ca7c3d0e70a550\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:08:16.675631 kubelet[2576]: I0302 13:08:16.675509 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b949c15d364f968f7ca7c3d0e70a550-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b949c15d364f968f7ca7c3d0e70a550\") " pod="kube-system/kube-apiserver-localhost"
Mar 2 13:08:16.675631 kubelet[2576]: I0302 13:08:16.675605 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:08:16.675631 kubelet[2576]: I0302 13:08:16.675624 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/db0989cdb653dfec284dd4f35625e9e7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"db0989cdb653dfec284dd4f35625e9e7\") " pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:08:16.677721 kubelet[2576]: E0302 13:08:16.677457 2576 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 2 13:08:16.678578 kubelet[2576]: E0302 13:08:16.678507 2576 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 2 13:08:16.738308 kubelet[2576]: I0302 13:08:16.738055 2576 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Mar 2 13:08:16.766975 kubelet[2576]: I0302 13:08:16.765508 2576 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Mar 2 13:08:16.766975 kubelet[2576]: I0302 13:08:16.765610 2576 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Mar 2 13:08:16.975478 kubelet[2576]: E0302 13:08:16.975253 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:16.979038 kubelet[2576]: E0302 13:08:16.978814 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:16.979038 kubelet[2576]: E0302 13:08:16.978940 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:17.275929 sudo[2597]: pam_unix(sudo:session): session closed for user root
Mar 2
13:08:17.453042 kubelet[2576]: I0302 13:08:17.452734 2576 apiserver.go:52] "Watching apiserver" Mar 2 13:08:17.475803 kubelet[2576]: I0302 13:08:17.475588 2576 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 2 13:08:17.517938 kubelet[2576]: I0302 13:08:17.517731 2576 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 13:08:17.519803 containerd[1451]: time="2026-03-02T13:08:17.519703896Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 2 13:08:17.523310 kubelet[2576]: I0302 13:08:17.523093 2576 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 13:08:17.581775 kubelet[2576]: I0302 13:08:17.580060 2576 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:08:17.581775 kubelet[2576]: E0302 13:08:17.580259 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:17.581775 kubelet[2576]: I0302 13:08:17.581097 2576 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:17.606552 kubelet[2576]: E0302 13:08:17.605987 2576 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 2 13:08:17.607612 kubelet[2576]: E0302 13:08:17.606799 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:17.609051 kubelet[2576]: E0302 13:08:17.607161 2576 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 2 13:08:17.609051 
kubelet[2576]: E0302 13:08:17.608576 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:17.686055 kubelet[2576]: I0302 13:08:17.684748 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.684725014 podStartE2EDuration="2.684725014s" podCreationTimestamp="2026-03-02 13:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:17.653989249 +0000 UTC m=+1.321572373" watchObservedRunningTime="2026-03-02 13:08:17.684725014 +0000 UTC m=+1.352308138" Mar 2 13:08:17.706955 kubelet[2576]: I0302 13:08:17.706700 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.706589998 podStartE2EDuration="2.706589998s" podCreationTimestamp="2026-03-02 13:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:17.685153984 +0000 UTC m=+1.352737128" watchObservedRunningTime="2026-03-02 13:08:17.706589998 +0000 UTC m=+1.374173122" Mar 2 13:08:17.741130 kubelet[2576]: I0302 13:08:17.740910 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.740802274 podStartE2EDuration="2.740802274s" podCreationTimestamp="2026-03-02 13:08:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:17.708494849 +0000 UTC m=+1.376077972" watchObservedRunningTime="2026-03-02 13:08:17.740802274 +0000 UTC m=+1.408385397" Mar 2 13:08:18.587125 kubelet[2576]: I0302 13:08:18.586969 2576 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e5c09aea-ffb1-482f-b7e0-e4f1af946947-kube-proxy\") pod \"kube-proxy-cxvpb\" (UID: \"e5c09aea-ffb1-482f-b7e0-e4f1af946947\") " pod="kube-system/kube-proxy-cxvpb" Mar 2 13:08:18.587125 kubelet[2576]: I0302 13:08:18.587017 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5c09aea-ffb1-482f-b7e0-e4f1af946947-xtables-lock\") pod \"kube-proxy-cxvpb\" (UID: \"e5c09aea-ffb1-482f-b7e0-e4f1af946947\") " pod="kube-system/kube-proxy-cxvpb" Mar 2 13:08:18.587125 kubelet[2576]: I0302 13:08:18.587038 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5c09aea-ffb1-482f-b7e0-e4f1af946947-lib-modules\") pod \"kube-proxy-cxvpb\" (UID: \"e5c09aea-ffb1-482f-b7e0-e4f1af946947\") " pod="kube-system/kube-proxy-cxvpb" Mar 2 13:08:18.587125 kubelet[2576]: I0302 13:08:18.587058 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcxkx\" (UniqueName: \"kubernetes.io/projected/e5c09aea-ffb1-482f-b7e0-e4f1af946947-kube-api-access-qcxkx\") pod \"kube-proxy-cxvpb\" (UID: \"e5c09aea-ffb1-482f-b7e0-e4f1af946947\") " pod="kube-system/kube-proxy-cxvpb" Mar 2 13:08:18.592485 kubelet[2576]: E0302 13:08:18.591455 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:18.592485 kubelet[2576]: E0302 13:08:18.591670 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:18.605570 systemd[1]: Created slice 
kubepods-besteffort-pode5c09aea_ffb1_482f_b7e0_e4f1af946947.slice - libcontainer container kubepods-besteffort-pode5c09aea_ffb1_482f_b7e0_e4f1af946947.slice. Mar 2 13:08:18.649026 systemd[1]: Created slice kubepods-burstable-pod34e43ca1_9eba_4b7f_8715_05a8e14ab597.slice - libcontainer container kubepods-burstable-pod34e43ca1_9eba_4b7f_8715_05a8e14ab597.slice. Mar 2 13:08:18.789090 kubelet[2576]: I0302 13:08:18.788628 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cni-path\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789090 kubelet[2576]: I0302 13:08:18.788777 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34e43ca1-9eba-4b7f-8715-05a8e14ab597-clustermesh-secrets\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789357 kubelet[2576]: I0302 13:08:18.789187 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-host-proc-sys-kernel\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789357 kubelet[2576]: I0302 13:08:18.789276 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-bpf-maps\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789357 kubelet[2576]: I0302 13:08:18.789299 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-hostproc\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789357 kubelet[2576]: I0302 13:08:18.789318 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-run\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789357 kubelet[2576]: I0302 13:08:18.789338 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-cgroup\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789357 kubelet[2576]: I0302 13:08:18.789357 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-xtables-lock\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789932 kubelet[2576]: I0302 13:08:18.789378 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-config-path\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789932 kubelet[2576]: I0302 13:08:18.789399 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-host-proc-sys-net\") pod 
\"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789932 kubelet[2576]: I0302 13:08:18.789513 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34e43ca1-9eba-4b7f-8715-05a8e14ab597-hubble-tls\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789932 kubelet[2576]: I0302 13:08:18.789536 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nwz5\" (UniqueName: \"kubernetes.io/projected/34e43ca1-9eba-4b7f-8715-05a8e14ab597-kube-api-access-6nwz5\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789932 kubelet[2576]: I0302 13:08:18.789561 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-etc-cni-netd\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.789932 kubelet[2576]: I0302 13:08:18.789592 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-lib-modules\") pod \"cilium-mbrhz\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " pod="kube-system/cilium-mbrhz" Mar 2 13:08:18.794144 systemd[1]: Created slice kubepods-besteffort-podc41f0d9f_e90d_42b4_8326_304f55aa778f.slice - libcontainer container kubepods-besteffort-podc41f0d9f_e90d_42b4_8326_304f55aa778f.slice. 
Mar 2 13:08:18.891547 kubelet[2576]: I0302 13:08:18.890990 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c41f0d9f-e90d-42b4-8326-304f55aa778f-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-25qfs\" (UID: \"c41f0d9f-e90d-42b4-8326-304f55aa778f\") " pod="kube-system/cilium-operator-6f9c7c5859-25qfs" Mar 2 13:08:18.891547 kubelet[2576]: I0302 13:08:18.891306 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjdpn\" (UniqueName: \"kubernetes.io/projected/c41f0d9f-e90d-42b4-8326-304f55aa778f-kube-api-access-kjdpn\") pod \"cilium-operator-6f9c7c5859-25qfs\" (UID: \"c41f0d9f-e90d-42b4-8326-304f55aa778f\") " pod="kube-system/cilium-operator-6f9c7c5859-25qfs" Mar 2 13:08:18.942913 kubelet[2576]: E0302 13:08:18.941909 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:18.943323 containerd[1451]: time="2026-03-02T13:08:18.943162635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cxvpb,Uid:e5c09aea-ffb1-482f-b7e0-e4f1af946947,Namespace:kube-system,Attempt:0,}" Mar 2 13:08:18.964934 kubelet[2576]: E0302 13:08:18.964735 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:18.966347 containerd[1451]: time="2026-03-02T13:08:18.966153509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mbrhz,Uid:34e43ca1-9eba-4b7f-8715-05a8e14ab597,Namespace:kube-system,Attempt:0,}" Mar 2 13:08:19.023756 containerd[1451]: time="2026-03-02T13:08:19.023131804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:08:19.023756 containerd[1451]: time="2026-03-02T13:08:19.023257909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:08:19.023756 containerd[1451]: time="2026-03-02T13:08:19.023275421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:19.023756 containerd[1451]: time="2026-03-02T13:08:19.023363675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:19.031737 containerd[1451]: time="2026-03-02T13:08:19.031349643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:08:19.031737 containerd[1451]: time="2026-03-02T13:08:19.031464457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:08:19.031737 containerd[1451]: time="2026-03-02T13:08:19.031479034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:19.031737 containerd[1451]: time="2026-03-02T13:08:19.031582586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:19.054351 systemd[1]: Started cri-containerd-40938f0dff5a95f3872fe18f289c1aca8b8edfbf806da5fcbea43bea75e2c2f3.scope - libcontainer container 40938f0dff5a95f3872fe18f289c1aca8b8edfbf806da5fcbea43bea75e2c2f3. Mar 2 13:08:19.071149 systemd[1]: Started cri-containerd-8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb.scope - libcontainer container 8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb. 
Mar 2 13:08:19.109373 kubelet[2576]: E0302 13:08:19.108567 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:19.113744 containerd[1451]: time="2026-03-02T13:08:19.113699011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-25qfs,Uid:c41f0d9f-e90d-42b4-8326-304f55aa778f,Namespace:kube-system,Attempt:0,}" Mar 2 13:08:19.134552 containerd[1451]: time="2026-03-02T13:08:19.134327542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mbrhz,Uid:34e43ca1-9eba-4b7f-8715-05a8e14ab597,Namespace:kube-system,Attempt:0,} returns sandbox id \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\"" Mar 2 13:08:19.139031 kubelet[2576]: E0302 13:08:19.138079 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:19.144721 containerd[1451]: time="2026-03-02T13:08:19.144467348Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 2 13:08:19.152974 containerd[1451]: time="2026-03-02T13:08:19.152136981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-cxvpb,Uid:e5c09aea-ffb1-482f-b7e0-e4f1af946947,Namespace:kube-system,Attempt:0,} returns sandbox id \"40938f0dff5a95f3872fe18f289c1aca8b8edfbf806da5fcbea43bea75e2c2f3\"" Mar 2 13:08:19.153678 kubelet[2576]: E0302 13:08:19.153579 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:19.178672 containerd[1451]: time="2026-03-02T13:08:19.178588379Z" level=info msg="CreateContainer within sandbox \"40938f0dff5a95f3872fe18f289c1aca8b8edfbf806da5fcbea43bea75e2c2f3\" for 
container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 13:08:19.231336 containerd[1451]: time="2026-03-02T13:08:19.230904476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:08:19.232550 containerd[1451]: time="2026-03-02T13:08:19.232104855Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:08:19.232550 containerd[1451]: time="2026-03-02T13:08:19.232188150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:19.232550 containerd[1451]: time="2026-03-02T13:08:19.232495501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:08:19.238096 containerd[1451]: time="2026-03-02T13:08:19.237984862Z" level=info msg="CreateContainer within sandbox \"40938f0dff5a95f3872fe18f289c1aca8b8edfbf806da5fcbea43bea75e2c2f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"23ee139394e8796ce943e63b916e9b745a51d5d5c4978eeb7333b6d0ef3a4d2b\"" Mar 2 13:08:19.239999 containerd[1451]: time="2026-03-02T13:08:19.239538387Z" level=info msg="StartContainer for \"23ee139394e8796ce943e63b916e9b745a51d5d5c4978eeb7333b6d0ef3a4d2b\"" Mar 2 13:08:19.388539 systemd[1]: Started cri-containerd-8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860.scope - libcontainer container 8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860. Mar 2 13:08:19.507382 systemd[1]: Started cri-containerd-23ee139394e8796ce943e63b916e9b745a51d5d5c4978eeb7333b6d0ef3a4d2b.scope - libcontainer container 23ee139394e8796ce943e63b916e9b745a51d5d5c4978eeb7333b6d0ef3a4d2b. 
Mar 2 13:08:19.872065 containerd[1451]: time="2026-03-02T13:08:19.870297114Z" level=info msg="StartContainer for \"23ee139394e8796ce943e63b916e9b745a51d5d5c4978eeb7333b6d0ef3a4d2b\" returns successfully" Mar 2 13:08:19.873994 containerd[1451]: time="2026-03-02T13:08:19.873680238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-25qfs,Uid:c41f0d9f-e90d-42b4-8326-304f55aa778f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860\"" Mar 2 13:08:19.877317 kubelet[2576]: E0302 13:08:19.877291 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:19.883644 kubelet[2576]: E0302 13:08:19.883316 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:20.682918 kubelet[2576]: E0302 13:08:20.680164 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:20.697376 kubelet[2576]: E0302 13:08:20.696585 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:20.790697 kubelet[2576]: I0302 13:08:20.785129 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-cxvpb" podStartSLOduration=2.785101828 podStartE2EDuration="2.785101828s" podCreationTimestamp="2026-03-02 13:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:20.78454876 +0000 UTC m=+4.452131894" watchObservedRunningTime="2026-03-02 
13:08:20.785101828 +0000 UTC m=+4.452684951" Mar 2 13:08:22.168097 kubelet[2576]: E0302 13:08:22.166537 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:23.666044 kubelet[2576]: E0302 13:08:23.615363 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:24.384946 kubelet[2576]: E0302 13:08:24.379626 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:26.893950 kubelet[2576]: E0302 13:08:26.893262 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:27.395385 kubelet[2576]: E0302 13:08:27.395137 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:32.296743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520741176.mount: Deactivated successfully. 
Mar 2 13:08:35.480228 containerd[1451]: time="2026-03-02T13:08:35.479061251Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:35.480228 containerd[1451]: time="2026-03-02T13:08:35.480068309Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 2 13:08:35.482383 containerd[1451]: time="2026-03-02T13:08:35.482326323Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:08:35.485708 containerd[1451]: time="2026-03-02T13:08:35.485539371Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 16.340406376s" Mar 2 13:08:35.485708 containerd[1451]: time="2026-03-02T13:08:35.485580402Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 2 13:08:35.488448 containerd[1451]: time="2026-03-02T13:08:35.488005343Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 2 13:08:35.497615 containerd[1451]: time="2026-03-02T13:08:35.497567709Z" level=info msg="CreateContainer within sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 2 13:08:35.537881 containerd[1451]: time="2026-03-02T13:08:35.537754090Z" level=info msg="CreateContainer within sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\"" Mar 2 13:08:35.539096 containerd[1451]: time="2026-03-02T13:08:35.538920279Z" level=info msg="StartContainer for \"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\"" Mar 2 13:08:35.668280 systemd[1]: Started cri-containerd-b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2.scope - libcontainer container b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2. Mar 2 13:08:35.745304 containerd[1451]: time="2026-03-02T13:08:35.745063393Z" level=info msg="StartContainer for \"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\" returns successfully" Mar 2 13:08:35.782028 systemd[1]: cri-containerd-b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2.scope: Deactivated successfully. 
Mar 2 13:08:36.046344 containerd[1451]: time="2026-03-02T13:08:36.041786658Z" level=info msg="shim disconnected" id=b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2 namespace=k8s.io Mar 2 13:08:36.046344 containerd[1451]: time="2026-03-02T13:08:36.046305211Z" level=warning msg="cleaning up after shim disconnected" id=b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2 namespace=k8s.io Mar 2 13:08:36.046344 containerd[1451]: time="2026-03-02T13:08:36.046322766Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:08:36.448550 kubelet[2576]: E0302 13:08:36.446980 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:36.465475 containerd[1451]: time="2026-03-02T13:08:36.465097640Z" level=info msg="CreateContainer within sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 2 13:08:36.528298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2-rootfs.mount: Deactivated successfully. Mar 2 13:08:36.591945 containerd[1451]: time="2026-03-02T13:08:36.591391009Z" level=info msg="CreateContainer within sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\"" Mar 2 13:08:36.598450 containerd[1451]: time="2026-03-02T13:08:36.598380226Z" level=info msg="StartContainer for \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\"" Mar 2 13:08:36.726173 systemd[1]: Started cri-containerd-451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c.scope - libcontainer container 451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c. 
Mar 2 13:08:36.794547 containerd[1451]: time="2026-03-02T13:08:36.793292092Z" level=info msg="StartContainer for \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\" returns successfully" Mar 2 13:08:36.819712 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:08:36.820263 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:08:36.820355 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:08:36.830576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:08:36.832012 systemd[1]: cri-containerd-451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c.scope: Deactivated successfully. Mar 2 13:08:36.923793 containerd[1451]: time="2026-03-02T13:08:36.923495925Z" level=info msg="shim disconnected" id=451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c namespace=k8s.io Mar 2 13:08:36.923793 containerd[1451]: time="2026-03-02T13:08:36.923606291Z" level=warning msg="cleaning up after shim disconnected" id=451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c namespace=k8s.io Mar 2 13:08:36.923793 containerd[1451]: time="2026-03-02T13:08:36.923624316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:08:36.936308 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 2 13:08:37.458793 kubelet[2576]: E0302 13:08:37.458542 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:08:37.484332 containerd[1451]: time="2026-03-02T13:08:37.484206141Z" level=info msg="CreateContainer within sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 2 13:08:37.527977 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c-rootfs.mount: Deactivated successfully. Mar 2 13:08:37.534292 containerd[1451]: time="2026-03-02T13:08:37.534148782Z" level=info msg="CreateContainer within sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\"" Mar 2 13:08:37.535136 containerd[1451]: time="2026-03-02T13:08:37.535004469Z" level=info msg="StartContainer for \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\"" Mar 2 13:08:37.606426 systemd[1]: Started cri-containerd-dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956.scope - libcontainer container dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956. Mar 2 13:08:37.674666 containerd[1451]: time="2026-03-02T13:08:37.674600387Z" level=info msg="StartContainer for \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\" returns successfully" Mar 2 13:08:37.679713 systemd[1]: cri-containerd-dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956.scope: Deactivated successfully. Mar 2 13:08:37.759210 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956-rootfs.mount: Deactivated successfully. 
Mar 2 13:08:37.816612 containerd[1451]: time="2026-03-02T13:08:37.816518789Z" level=info msg="shim disconnected" id=dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956 namespace=k8s.io
Mar 2 13:08:37.816612 containerd[1451]: time="2026-03-02T13:08:37.816589256Z" level=warning msg="cleaning up after shim disconnected" id=dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956 namespace=k8s.io
Mar 2 13:08:37.816612 containerd[1451]: time="2026-03-02T13:08:37.816603886Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:08:37.882590 containerd[1451]: time="2026-03-02T13:08:37.882130190Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:08:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:08:37.932078 containerd[1451]: time="2026-03-02T13:08:37.931501700Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:08:37.936481 containerd[1451]: time="2026-03-02T13:08:37.936219601Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197"
Mar 2 13:08:37.940603 containerd[1451]: time="2026-03-02T13:08:37.940495396Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 2 13:08:37.945317 containerd[1451]: time="2026-03-02T13:08:37.945201985Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.457150673s"
Mar 2 13:08:37.945317 containerd[1451]: time="2026-03-02T13:08:37.945251702Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Mar 2 13:08:37.956437 containerd[1451]: time="2026-03-02T13:08:37.956363788Z" level=info msg="CreateContainer within sandbox \"8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Mar 2 13:08:37.994710 containerd[1451]: time="2026-03-02T13:08:37.994579611Z" level=info msg="CreateContainer within sandbox \"8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\""
Mar 2 13:08:37.998124 containerd[1451]: time="2026-03-02T13:08:37.997187164Z" level=info msg="StartContainer for \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\""
Mar 2 13:08:38.087487 systemd[1]: Started cri-containerd-6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c.scope - libcontainer container 6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c.
Mar 2 13:08:38.170184 containerd[1451]: time="2026-03-02T13:08:38.169594369Z" level=info msg="StartContainer for \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\" returns successfully"
Mar 2 13:08:38.476467 kubelet[2576]: E0302 13:08:38.476153 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:38.484774 kubelet[2576]: E0302 13:08:38.484624 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:38.498074 containerd[1451]: time="2026-03-02T13:08:38.497688639Z" level=info msg="CreateContainer within sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 13:08:38.565796 containerd[1451]: time="2026-03-02T13:08:38.564212100Z" level=info msg="CreateContainer within sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\""
Mar 2 13:08:38.568931 containerd[1451]: time="2026-03-02T13:08:38.566667880Z" level=info msg="StartContainer for \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\""
Mar 2 13:08:38.629294 kubelet[2576]: I0302 13:08:38.628716 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-25qfs" podStartSLOduration=2.566204354 podStartE2EDuration="20.62869293s" podCreationTimestamp="2026-03-02 13:08:18 +0000 UTC" firstStartedPulling="2026-03-02 13:08:19.884774455 +0000 UTC m=+3.552357579" lastFinishedPulling="2026-03-02 13:08:37.947263031 +0000 UTC m=+21.614846155" observedRunningTime="2026-03-02 13:08:38.622793469 +0000 UTC m=+22.290376603" watchObservedRunningTime="2026-03-02 13:08:38.62869293 +0000 UTC m=+22.296276184"
Mar 2 13:08:38.688215 systemd[1]: Started cri-containerd-3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143.scope - libcontainer container 3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143.
Mar 2 13:08:38.835598 systemd[1]: cri-containerd-3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143.scope: Deactivated successfully.
Mar 2 13:08:38.843220 containerd[1451]: time="2026-03-02T13:08:38.842968936Z" level=info msg="StartContainer for \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\" returns successfully"
Mar 2 13:08:38.937450 containerd[1451]: time="2026-03-02T13:08:38.936942606Z" level=info msg="shim disconnected" id=3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143 namespace=k8s.io
Mar 2 13:08:38.937450 containerd[1451]: time="2026-03-02T13:08:38.937071990Z" level=warning msg="cleaning up after shim disconnected" id=3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143 namespace=k8s.io
Mar 2 13:08:38.937450 containerd[1451]: time="2026-03-02T13:08:38.937090356Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:08:39.504415 kubelet[2576]: E0302 13:08:39.503689 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:39.505212 kubelet[2576]: E0302 13:08:39.504739 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:39.527205 containerd[1451]: time="2026-03-02T13:08:39.526224314Z" level=info msg="CreateContainer within sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 13:08:39.529776 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143-rootfs.mount: Deactivated successfully.
Mar 2 13:08:39.576635 containerd[1451]: time="2026-03-02T13:08:39.576470949Z" level=info msg="CreateContainer within sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\""
Mar 2 13:08:39.577662 containerd[1451]: time="2026-03-02T13:08:39.577573985Z" level=info msg="StartContainer for \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\""
Mar 2 13:08:39.671076 systemd[1]: run-containerd-runc-k8s.io-58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb-runc.E9WOAE.mount: Deactivated successfully.
Mar 2 13:08:39.689514 systemd[1]: Started cri-containerd-58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb.scope - libcontainer container 58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb.
Mar 2 13:08:39.793307 containerd[1451]: time="2026-03-02T13:08:39.792973816Z" level=info msg="StartContainer for \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\" returns successfully"
Mar 2 13:08:39.984665 kubelet[2576]: I0302 13:08:39.984561 2576 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Mar 2 13:08:40.086526 systemd[1]: Created slice kubepods-burstable-pod5eeabc0e_1648_4465_9d0f_3b6f72561683.slice - libcontainer container kubepods-burstable-pod5eeabc0e_1648_4465_9d0f_3b6f72561683.slice.
Mar 2 13:08:40.112761 systemd[1]: Created slice kubepods-burstable-pod088e3a1e_dd44_4326_850b_13e34f115091.slice - libcontainer container kubepods-burstable-pod088e3a1e_dd44_4326_850b_13e34f115091.slice.
Mar 2 13:08:40.146596 kubelet[2576]: I0302 13:08:40.146501 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dfhs\" (UniqueName: \"kubernetes.io/projected/088e3a1e-dd44-4326-850b-13e34f115091-kube-api-access-2dfhs\") pod \"coredns-66bc5c9577-gbpvk\" (UID: \"088e3a1e-dd44-4326-850b-13e34f115091\") " pod="kube-system/coredns-66bc5c9577-gbpvk"
Mar 2 13:08:40.146596 kubelet[2576]: I0302 13:08:40.146540 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5eeabc0e-1648-4465-9d0f-3b6f72561683-config-volume\") pod \"coredns-66bc5c9577-ccq6h\" (UID: \"5eeabc0e-1648-4465-9d0f-3b6f72561683\") " pod="kube-system/coredns-66bc5c9577-ccq6h"
Mar 2 13:08:40.146596 kubelet[2576]: I0302 13:08:40.146559 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/088e3a1e-dd44-4326-850b-13e34f115091-config-volume\") pod \"coredns-66bc5c9577-gbpvk\" (UID: \"088e3a1e-dd44-4326-850b-13e34f115091\") " pod="kube-system/coredns-66bc5c9577-gbpvk"
Mar 2 13:08:40.146596 kubelet[2576]: I0302 13:08:40.146573 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q97mk\" (UniqueName: \"kubernetes.io/projected/5eeabc0e-1648-4465-9d0f-3b6f72561683-kube-api-access-q97mk\") pod \"coredns-66bc5c9577-ccq6h\" (UID: \"5eeabc0e-1648-4465-9d0f-3b6f72561683\") " pod="kube-system/coredns-66bc5c9577-ccq6h"
Mar 2 13:08:40.406933 kubelet[2576]: E0302 13:08:40.406572 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:40.410579 containerd[1451]: time="2026-03-02T13:08:40.410494037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ccq6h,Uid:5eeabc0e-1648-4465-9d0f-3b6f72561683,Namespace:kube-system,Attempt:0,}"
Mar 2 13:08:40.426120 kubelet[2576]: E0302 13:08:40.424590 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:40.427509 containerd[1451]: time="2026-03-02T13:08:40.427334277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gbpvk,Uid:088e3a1e-dd44-4326-850b-13e34f115091,Namespace:kube-system,Attempt:0,}"
Mar 2 13:08:40.521675 kubelet[2576]: E0302 13:08:40.521501 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:40.598624 kubelet[2576]: I0302 13:08:40.595688 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mbrhz" podStartSLOduration=6.25013625 podStartE2EDuration="22.595671131s" podCreationTimestamp="2026-03-02 13:08:18 +0000 UTC" firstStartedPulling="2026-03-02 13:08:19.141999358 +0000 UTC m=+2.809582483" lastFinishedPulling="2026-03-02 13:08:35.487534239 +0000 UTC m=+19.155117364" observedRunningTime="2026-03-02 13:08:40.591632778 +0000 UTC m=+24.259215922" watchObservedRunningTime="2026-03-02 13:08:40.595671131 +0000 UTC m=+24.263254275"
Mar 2 13:08:41.530387 kubelet[2576]: E0302 13:08:41.529657 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:42.533048 kubelet[2576]: E0302 13:08:42.532340 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:42.725448 systemd-networkd[1374]: cilium_host: Link UP
Mar 2 13:08:42.727487 systemd-networkd[1374]: cilium_net: Link UP
Mar 2 13:08:42.729977 systemd-networkd[1374]: cilium_net: Gained carrier
Mar 2 13:08:42.730530 systemd-networkd[1374]: cilium_host: Gained carrier
Mar 2 13:08:43.040781 systemd-networkd[1374]: cilium_vxlan: Link UP
Mar 2 13:08:43.040793 systemd-networkd[1374]: cilium_vxlan: Gained carrier
Mar 2 13:08:43.128334 systemd-networkd[1374]: cilium_net: Gained IPv6LL
Mar 2 13:08:43.460394 systemd-networkd[1374]: cilium_host: Gained IPv6LL
Mar 2 13:08:43.512327 kernel: NET: Registered PF_ALG protocol family
Mar 2 13:08:44.547200 systemd-networkd[1374]: cilium_vxlan: Gained IPv6LL
Mar 2 13:08:44.995462 systemd-networkd[1374]: lxc_health: Link UP
Mar 2 13:08:45.013161 systemd-networkd[1374]: lxc_health: Gained carrier
Mar 2 13:08:45.415730 systemd[1]: run-containerd-runc-k8s.io-58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb-runc.Mj2642.mount: Deactivated successfully.
Mar 2 13:08:45.657175 systemd-networkd[1374]: lxcca098fc23f2e: Link UP
Mar 2 13:08:45.668163 kernel: eth0: renamed from tmpf4334
Mar 2 13:08:45.693962 systemd-networkd[1374]: lxcb4e90e301914: Link UP
Mar 2 13:08:45.699048 kernel: eth0: renamed from tmpdf1ab
Mar 2 13:08:45.707202 systemd-networkd[1374]: lxcca098fc23f2e: Gained carrier
Mar 2 13:08:45.710504 systemd-networkd[1374]: lxcb4e90e301914: Gained carrier
Mar 2 13:08:46.530361 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Mar 2 13:08:46.964186 kubelet[2576]: E0302 13:08:46.963512 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:47.362461 systemd-networkd[1374]: lxcca098fc23f2e: Gained IPv6LL
Mar 2 13:08:47.576966 kubelet[2576]: E0302 13:08:47.572969 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:47.683220 systemd-networkd[1374]: lxcb4e90e301914: Gained IPv6LL
Mar 2 13:08:48.570706 kubelet[2576]: E0302 13:08:48.570409 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:51.174236 containerd[1451]: time="2026-03-02T13:08:51.172654913Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:08:51.175528 containerd[1451]: time="2026-03-02T13:08:51.175035919Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:08:51.175528 containerd[1451]: time="2026-03-02T13:08:51.175258921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:08:51.175528 containerd[1451]: time="2026-03-02T13:08:51.175428978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:08:51.176965 containerd[1451]: time="2026-03-02T13:08:51.176299480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:08:51.176965 containerd[1451]: time="2026-03-02T13:08:51.176524153Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:08:51.176965 containerd[1451]: time="2026-03-02T13:08:51.176612213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:08:51.176965 containerd[1451]: time="2026-03-02T13:08:51.176776800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:08:51.234233 systemd[1]: run-containerd-runc-k8s.io-df1aba5303880018114c13370530375676cbf48d4db624efeb52087347bd1c22-runc.QpfOk4.mount: Deactivated successfully.
Mar 2 13:08:51.261330 systemd[1]: Started cri-containerd-df1aba5303880018114c13370530375676cbf48d4db624efeb52087347bd1c22.scope - libcontainer container df1aba5303880018114c13370530375676cbf48d4db624efeb52087347bd1c22.
Mar 2 13:08:51.265347 systemd[1]: Started cri-containerd-f43341e42b7aec0022e8602ec1eb81e1ddf087ee89ed18018522e4988e8e1c20.scope - libcontainer container f43341e42b7aec0022e8602ec1eb81e1ddf087ee89ed18018522e4988e8e1c20.
Mar 2 13:08:51.296798 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 13:08:51.304424 systemd-resolved[1376]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 2 13:08:51.364712 containerd[1451]: time="2026-03-02T13:08:51.364661127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-ccq6h,Uid:5eeabc0e-1648-4465-9d0f-3b6f72561683,Namespace:kube-system,Attempt:0,} returns sandbox id \"df1aba5303880018114c13370530375676cbf48d4db624efeb52087347bd1c22\""
Mar 2 13:08:51.366596 kubelet[2576]: E0302 13:08:51.366271 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:51.375679 containerd[1451]: time="2026-03-02T13:08:51.375468479Z" level=info msg="CreateContainer within sandbox \"df1aba5303880018114c13370530375676cbf48d4db624efeb52087347bd1c22\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 2 13:08:51.380910 containerd[1451]: time="2026-03-02T13:08:51.380744537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-gbpvk,Uid:088e3a1e-dd44-4326-850b-13e34f115091,Namespace:kube-system,Attempt:0,} returns sandbox id \"f43341e42b7aec0022e8602ec1eb81e1ddf087ee89ed18018522e4988e8e1c20\""
Mar 2 13:08:51.382625 kubelet[2576]: E0302 13:08:51.382521 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:51.393086 containerd[1451]: time="2026-03-02T13:08:51.392730810Z" level=info msg="CreateContainer within sandbox \"f43341e42b7aec0022e8602ec1eb81e1ddf087ee89ed18018522e4988e8e1c20\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Mar 2 13:08:51.426176 containerd[1451]: time="2026-03-02T13:08:51.425622683Z" level=info msg="CreateContainer within sandbox \"df1aba5303880018114c13370530375676cbf48d4db624efeb52087347bd1c22\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0273ef99253b56cd480242ad6346efffb763e25c5bb0a62170d2f80dc7db3f1b\""
Mar 2 13:08:51.431025 containerd[1451]: time="2026-03-02T13:08:51.429448727Z" level=info msg="StartContainer for \"0273ef99253b56cd480242ad6346efffb763e25c5bb0a62170d2f80dc7db3f1b\""
Mar 2 13:08:51.469428 containerd[1451]: time="2026-03-02T13:08:51.469211375Z" level=info msg="CreateContainer within sandbox \"f43341e42b7aec0022e8602ec1eb81e1ddf087ee89ed18018522e4988e8e1c20\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"378fcfcec6743f04b315d131a640d482e46d5722db1aa479ad6971c5d9f357e5\""
Mar 2 13:08:51.472265 containerd[1451]: time="2026-03-02T13:08:51.470642264Z" level=info msg="StartContainer for \"378fcfcec6743f04b315d131a640d482e46d5722db1aa479ad6971c5d9f357e5\""
Mar 2 13:08:51.506319 systemd[1]: Started cri-containerd-0273ef99253b56cd480242ad6346efffb763e25c5bb0a62170d2f80dc7db3f1b.scope - libcontainer container 0273ef99253b56cd480242ad6346efffb763e25c5bb0a62170d2f80dc7db3f1b.
Mar 2 13:08:51.550392 systemd[1]: Started cri-containerd-378fcfcec6743f04b315d131a640d482e46d5722db1aa479ad6971c5d9f357e5.scope - libcontainer container 378fcfcec6743f04b315d131a640d482e46d5722db1aa479ad6971c5d9f357e5.
Mar 2 13:08:51.579350 containerd[1451]: time="2026-03-02T13:08:51.579189579Z" level=info msg="StartContainer for \"0273ef99253b56cd480242ad6346efffb763e25c5bb0a62170d2f80dc7db3f1b\" returns successfully"
Mar 2 13:08:51.608789 kubelet[2576]: E0302 13:08:51.608546 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:51.649566 containerd[1451]: time="2026-03-02T13:08:51.649321903Z" level=info msg="StartContainer for \"378fcfcec6743f04b315d131a640d482e46d5722db1aa479ad6971c5d9f357e5\" returns successfully"
Mar 2 13:08:52.190533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount822924670.mount: Deactivated successfully.
Mar 2 13:08:52.620478 kubelet[2576]: E0302 13:08:52.620079 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:52.624440 kubelet[2576]: E0302 13:08:52.624288 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:52.677414 kubelet[2576]: I0302 13:08:52.676885 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-ccq6h" podStartSLOduration=34.67677028 podStartE2EDuration="34.67677028s" podCreationTimestamp="2026-03-02 13:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:51.663316196 +0000 UTC m=+35.330899320" watchObservedRunningTime="2026-03-02 13:08:52.67677028 +0000 UTC m=+36.344353414"
Mar 2 13:08:52.677414 kubelet[2576]: I0302 13:08:52.677394 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gbpvk" podStartSLOduration=34.67738738 podStartE2EDuration="34.67738738s" podCreationTimestamp="2026-03-02 13:08:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:08:52.675367403 +0000 UTC m=+36.342950537" watchObservedRunningTime="2026-03-02 13:08:52.67738738 +0000 UTC m=+36.344970524"
Mar 2 13:08:53.371329 sudo[1647]: pam_unix(sudo:session): session closed for user root
Mar 2 13:08:53.375364 sshd[1643]: pam_unix(sshd:session): session closed for user core
Mar 2 13:08:53.382685 systemd[1]: sshd@8-10.0.0.56:22-10.0.0.1:43824.service: Deactivated successfully.
Mar 2 13:08:53.386508 systemd[1]: session-9.scope: Deactivated successfully.
Mar 2 13:08:53.387341 systemd[1]: session-9.scope: Consumed 12.605s CPU time, 161.6M memory peak, 0B memory swap peak.
Mar 2 13:08:53.391106 systemd-logind[1433]: Session 9 logged out. Waiting for processes to exit.
Mar 2 13:08:53.394792 systemd-logind[1433]: Removed session 9.
Mar 2 13:08:53.622985 kubelet[2576]: E0302 13:08:53.622467 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:53.624682 kubelet[2576]: E0302 13:08:53.623583 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:08:54.631005 kubelet[2576]: E0302 13:08:54.630439 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:09:21.549587 kubelet[2576]: E0302 13:09:21.549299 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:09:29.550975 kubelet[2576]: E0302 13:09:29.550448 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:09:39.560569 kubelet[2576]: E0302 13:09:39.559991 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:09:49.564157 kubelet[2576]: E0302 13:09:49.561222 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:09:53.589169 kubelet[2576]: E0302 13:09:53.580245 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:10:00.631925 systemd[1]: Started sshd@9-10.0.0.56:22-10.0.0.1:36530.service - OpenSSH per-connection server daemon (10.0.0.1:36530).
Mar 2 13:10:00.809438 sshd[4104]: Accepted publickey for core from 10.0.0.1 port 36530 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:00.817347 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:00.871541 systemd-logind[1433]: New session 10 of user core.
Mar 2 13:10:00.887085 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 2 13:10:01.509051 sshd[4104]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:01.515144 systemd[1]: sshd@9-10.0.0.56:22-10.0.0.1:36530.service: Deactivated successfully.
Mar 2 13:10:01.522122 systemd[1]: session-10.scope: Deactivated successfully.
Mar 2 13:10:01.525956 systemd-logind[1433]: Session 10 logged out. Waiting for processes to exit.
Mar 2 13:10:01.528981 systemd-logind[1433]: Removed session 10.
Mar 2 13:10:01.552943 kubelet[2576]: E0302 13:10:01.551031 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:10:01.552943 kubelet[2576]: E0302 13:10:01.552460 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:10:06.537665 systemd[1]: Started sshd@10-10.0.0.56:22-10.0.0.1:36534.service - OpenSSH per-connection server daemon (10.0.0.1:36534).
Mar 2 13:10:06.621766 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 36534 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:06.625761 sshd[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:06.643491 systemd-logind[1433]: New session 11 of user core.
Mar 2 13:10:06.663737 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 2 13:10:06.887550 sshd[4124]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:06.896049 systemd[1]: sshd@10-10.0.0.56:22-10.0.0.1:36534.service: Deactivated successfully.
Mar 2 13:10:06.899572 systemd[1]: session-11.scope: Deactivated successfully.
Mar 2 13:10:06.903277 systemd-logind[1433]: Session 11 logged out. Waiting for processes to exit.
Mar 2 13:10:06.906148 systemd-logind[1433]: Removed session 11.
Mar 2 13:10:07.554087 kubelet[2576]: E0302 13:10:07.551526 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:10:11.919539 systemd[1]: Started sshd@11-10.0.0.56:22-10.0.0.1:52794.service - OpenSSH per-connection server daemon (10.0.0.1:52794).
Mar 2 13:10:12.001738 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 52794 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:12.004738 sshd[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:12.022037 systemd-logind[1433]: New session 12 of user core.
Mar 2 13:10:12.039435 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 2 13:10:12.365777 sshd[4140]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:12.376146 systemd[1]: sshd@11-10.0.0.56:22-10.0.0.1:52794.service: Deactivated successfully.
Mar 2 13:10:12.379465 systemd[1]: session-12.scope: Deactivated successfully.
Mar 2 13:10:12.385683 systemd-logind[1433]: Session 12 logged out. Waiting for processes to exit.
Mar 2 13:10:12.394160 systemd-logind[1433]: Removed session 12.
Mar 2 13:10:17.390341 systemd[1]: Started sshd@12-10.0.0.56:22-10.0.0.1:52804.service - OpenSSH per-connection server daemon (10.0.0.1:52804).
Mar 2 13:10:17.440394 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 52804 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:17.444145 sshd[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:17.457732 systemd-logind[1433]: New session 13 of user core.
Mar 2 13:10:17.473420 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 2 13:10:17.685043 sshd[4158]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:17.690954 systemd[1]: sshd@12-10.0.0.56:22-10.0.0.1:52804.service: Deactivated successfully.
Mar 2 13:10:17.694452 systemd[1]: session-13.scope: Deactivated successfully.
Mar 2 13:10:17.698248 systemd-logind[1433]: Session 13 logged out. Waiting for processes to exit.
Mar 2 13:10:17.701027 systemd-logind[1433]: Removed session 13.
Mar 2 13:10:22.736632 systemd[1]: Started sshd@13-10.0.0.56:22-10.0.0.1:49748.service - OpenSSH per-connection server daemon (10.0.0.1:49748).
Mar 2 13:10:22.796260 sshd[4176]: Accepted publickey for core from 10.0.0.1 port 49748 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:22.802585 sshd[4176]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:22.818026 systemd-logind[1433]: New session 14 of user core.
Mar 2 13:10:22.827670 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 2 13:10:23.104067 sshd[4176]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:23.113228 systemd[1]: sshd@13-10.0.0.56:22-10.0.0.1:49748.service: Deactivated successfully.
Mar 2 13:10:23.116923 systemd[1]: session-14.scope: Deactivated successfully.
Mar 2 13:10:23.120272 systemd-logind[1433]: Session 14 logged out. Waiting for processes to exit.
Mar 2 13:10:23.136231 systemd-logind[1433]: Removed session 14.
Mar 2 13:10:28.169741 systemd[1]: Started sshd@14-10.0.0.56:22-10.0.0.1:49750.service - OpenSSH per-connection server daemon (10.0.0.1:49750).
Mar 2 13:10:28.280383 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 49750 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:28.285544 sshd[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:28.304194 systemd-logind[1433]: New session 15 of user core.
Mar 2 13:10:28.318733 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 2 13:10:28.754226 sshd[4191]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:28.763514 systemd[1]: sshd@14-10.0.0.56:22-10.0.0.1:49750.service: Deactivated successfully.
Mar 2 13:10:28.770109 systemd[1]: session-15.scope: Deactivated successfully.
Mar 2 13:10:28.776550 systemd-logind[1433]: Session 15 logged out. Waiting for processes to exit.
Mar 2 13:10:28.781656 systemd-logind[1433]: Removed session 15.
Mar 2 13:10:30.557486 kubelet[2576]: E0302 13:10:30.556351 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:10:33.813192 systemd[1]: Started sshd@15-10.0.0.56:22-10.0.0.1:59678.service - OpenSSH per-connection server daemon (10.0.0.1:59678).
Mar 2 13:10:33.922517 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 59678 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:33.932690 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:33.961580 systemd-logind[1433]: New session 16 of user core.
Mar 2 13:10:33.972420 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 2 13:10:34.222485 sshd[4206]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:34.244160 systemd[1]: sshd@15-10.0.0.56:22-10.0.0.1:59678.service: Deactivated successfully.
Mar 2 13:10:34.248517 systemd[1]: session-16.scope: Deactivated successfully.
Mar 2 13:10:34.253404 systemd-logind[1433]: Session 16 logged out. Waiting for processes to exit.
Mar 2 13:10:34.263696 systemd[1]: Started sshd@16-10.0.0.56:22-10.0.0.1:59688.service - OpenSSH per-connection server daemon (10.0.0.1:59688).
Mar 2 13:10:34.265589 systemd-logind[1433]: Removed session 16.
Mar 2 13:10:34.328269 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 59688 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:34.330658 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:34.345157 systemd-logind[1433]: New session 17 of user core.
Mar 2 13:10:34.361432 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 2 13:10:34.714030 sshd[4222]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:34.737171 systemd[1]: sshd@16-10.0.0.56:22-10.0.0.1:59688.service: Deactivated successfully.
Mar 2 13:10:34.742189 systemd[1]: session-17.scope: Deactivated successfully.
Mar 2 13:10:34.747626 systemd-logind[1433]: Session 17 logged out. Waiting for processes to exit.
Mar 2 13:10:34.758772 systemd[1]: Started sshd@17-10.0.0.56:22-10.0.0.1:59700.service - OpenSSH per-connection server daemon (10.0.0.1:59700).
Mar 2 13:10:34.776534 systemd-logind[1433]: Removed session 17.
Mar 2 13:10:34.850837 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 59700 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:34.859161 sshd[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:34.877655 systemd-logind[1433]: New session 18 of user core.
Mar 2 13:10:34.889673 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 2 13:10:35.175005 sshd[4235]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:35.183552 systemd[1]: sshd@17-10.0.0.56:22-10.0.0.1:59700.service: Deactivated successfully.
Mar 2 13:10:35.186756 systemd[1]: session-18.scope: Deactivated successfully.
Mar 2 13:10:35.190347 systemd-logind[1433]: Session 18 logged out. Waiting for processes to exit.
Mar 2 13:10:35.194679 systemd-logind[1433]: Removed session 18.
Mar 2 13:10:40.204703 systemd[1]: Started sshd@18-10.0.0.56:22-10.0.0.1:38932.service - OpenSSH per-connection server daemon (10.0.0.1:38932).
Mar 2 13:10:40.273150 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 38932 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:40.277477 sshd[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:40.299686 systemd-logind[1433]: New session 19 of user core.
Mar 2 13:10:40.312632 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 2 13:10:40.600465 sshd[4251]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:40.609410 systemd[1]: sshd@18-10.0.0.56:22-10.0.0.1:38932.service: Deactivated successfully.
Mar 2 13:10:40.613784 systemd[1]: session-19.scope: Deactivated successfully.
Mar 2 13:10:40.616750 systemd-logind[1433]: Session 19 logged out. Waiting for processes to exit.
Mar 2 13:10:40.621603 systemd-logind[1433]: Removed session 19.
Mar 2 13:10:45.643964 systemd[1]: Started sshd@19-10.0.0.56:22-10.0.0.1:38946.service - OpenSSH per-connection server daemon (10.0.0.1:38946).
Mar 2 13:10:45.705370 sshd[4265]: Accepted publickey for core from 10.0.0.1 port 38946 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:45.709009 sshd[4265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:45.720239 systemd-logind[1433]: New session 20 of user core.
Mar 2 13:10:45.748492 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 2 13:10:46.047031 sshd[4265]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:46.055581 systemd-logind[1433]: Session 20 logged out. Waiting for processes to exit.
Mar 2 13:10:46.058720 systemd[1]: sshd@19-10.0.0.56:22-10.0.0.1:38946.service: Deactivated successfully.
Mar 2 13:10:46.062142 systemd[1]: session-20.scope: Deactivated successfully.
Mar 2 13:10:46.066670 systemd-logind[1433]: Removed session 20.
Mar 2 13:10:47.550199 kubelet[2576]: E0302 13:10:47.549696 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:10:47.551898 kubelet[2576]: E0302 13:10:47.551601 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:10:51.079538 systemd[1]: Started sshd@20-10.0.0.56:22-10.0.0.1:48562.service - OpenSSH per-connection server daemon (10.0.0.1:48562).
Mar 2 13:10:51.173286 sshd[4281]: Accepted publickey for core from 10.0.0.1 port 48562 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:51.175723 sshd[4281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:51.202947 systemd-logind[1433]: New session 21 of user core.
Mar 2 13:10:51.212222 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 2 13:10:51.507967 sshd[4281]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:51.522225 systemd[1]: sshd@20-10.0.0.56:22-10.0.0.1:48562.service: Deactivated successfully.
Mar 2 13:10:51.538403 systemd[1]: session-21.scope: Deactivated successfully.
Mar 2 13:10:51.541149 systemd-logind[1433]: Session 21 logged out. Waiting for processes to exit.
Mar 2 13:10:51.547262 systemd-logind[1433]: Removed session 21.
Mar 2 13:10:56.555517 systemd[1]: Started sshd@21-10.0.0.56:22-10.0.0.1:48566.service - OpenSSH per-connection server daemon (10.0.0.1:48566).
Mar 2 13:10:56.621065 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 48566 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:10:56.638480 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:10:56.663222 systemd-logind[1433]: New session 22 of user core.
Mar 2 13:10:56.680952 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 2 13:10:57.013576 sshd[4296]: pam_unix(sshd:session): session closed for user core
Mar 2 13:10:57.039703 systemd[1]: sshd@21-10.0.0.56:22-10.0.0.1:48566.service: Deactivated successfully.
Mar 2 13:10:57.045253 systemd[1]: session-22.scope: Deactivated successfully.
Mar 2 13:10:57.047352 systemd-logind[1433]: Session 22 logged out. Waiting for processes to exit.
Mar 2 13:10:57.057252 systemd-logind[1433]: Removed session 22.
Mar 2 13:11:02.073182 systemd[1]: Started sshd@22-10.0.0.56:22-10.0.0.1:49710.service - OpenSSH per-connection server daemon (10.0.0.1:49710).
Mar 2 13:11:02.173538 sshd[4311]: Accepted publickey for core from 10.0.0.1 port 49710 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:02.180799 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:02.199545 systemd-logind[1433]: New session 23 of user core.
Mar 2 13:11:02.211218 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 2 13:11:02.597115 sshd[4311]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:02.610029 systemd[1]: sshd@22-10.0.0.56:22-10.0.0.1:49710.service: Deactivated successfully.
Mar 2 13:11:02.614325 systemd[1]: session-23.scope: Deactivated successfully.
Mar 2 13:11:02.638565 systemd-logind[1433]: Session 23 logged out. Waiting for processes to exit.
Mar 2 13:11:02.645458 systemd-logind[1433]: Removed session 23.
Mar 2 13:11:05.550518 kubelet[2576]: E0302 13:11:05.550334 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:06.557992 kubelet[2576]: E0302 13:11:06.556651 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:07.636440 systemd[1]: Started sshd@23-10.0.0.56:22-10.0.0.1:49722.service - OpenSSH per-connection server daemon (10.0.0.1:49722).
Mar 2 13:11:07.773649 sshd[4326]: Accepted publickey for core from 10.0.0.1 port 49722 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:07.781787 sshd[4326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:07.811596 systemd-logind[1433]: New session 24 of user core.
Mar 2 13:11:07.828736 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 2 13:11:08.126353 sshd[4326]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:08.147582 systemd[1]: sshd@23-10.0.0.56:22-10.0.0.1:49722.service: Deactivated successfully.
Mar 2 13:11:08.155794 systemd[1]: session-24.scope: Deactivated successfully.
Mar 2 13:11:08.167057 systemd-logind[1433]: Session 24 logged out. Waiting for processes to exit.
Mar 2 13:11:08.178277 systemd-logind[1433]: Removed session 24.
Mar 2 13:11:13.146217 systemd[1]: Started sshd@24-10.0.0.56:22-10.0.0.1:60380.service - OpenSSH per-connection server daemon (10.0.0.1:60380).
Mar 2 13:11:13.273071 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 60380 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:13.275554 sshd[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:13.294044 systemd-logind[1433]: New session 25 of user core.
Mar 2 13:11:13.301583 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 2 13:11:13.552560 sshd[4340]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:13.567463 systemd[1]: sshd@24-10.0.0.56:22-10.0.0.1:60380.service: Deactivated successfully.
Mar 2 13:11:13.571267 systemd[1]: session-25.scope: Deactivated successfully.
Mar 2 13:11:13.575387 systemd-logind[1433]: Session 25 logged out. Waiting for processes to exit.
Mar 2 13:11:13.587602 systemd[1]: Started sshd@25-10.0.0.56:22-10.0.0.1:60394.service - OpenSSH per-connection server daemon (10.0.0.1:60394).
Mar 2 13:11:13.592653 systemd-logind[1433]: Removed session 25.
Mar 2 13:11:13.655788 sshd[4354]: Accepted publickey for core from 10.0.0.1 port 60394 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:13.661158 sshd[4354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:13.674200 systemd-logind[1433]: New session 26 of user core.
Mar 2 13:11:13.690405 systemd[1]: Started session-26.scope - Session 26 of User core.
Mar 2 13:11:14.390554 sshd[4354]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:14.408572 systemd[1]: sshd@25-10.0.0.56:22-10.0.0.1:60394.service: Deactivated successfully.
Mar 2 13:11:14.412319 systemd[1]: session-26.scope: Deactivated successfully.
Mar 2 13:11:14.417221 systemd-logind[1433]: Session 26 logged out. Waiting for processes to exit.
Mar 2 13:11:14.438260 systemd[1]: Started sshd@26-10.0.0.56:22-10.0.0.1:60402.service - OpenSSH per-connection server daemon (10.0.0.1:60402).
Mar 2 13:11:14.440572 systemd-logind[1433]: Removed session 26.
Mar 2 13:11:14.511207 sshd[4366]: Accepted publickey for core from 10.0.0.1 port 60402 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:14.514097 sshd[4366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:14.524537 systemd-logind[1433]: New session 27 of user core.
Mar 2 13:11:14.536780 systemd[1]: Started session-27.scope - Session 27 of User core.
Mar 2 13:11:15.478342 sshd[4366]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:15.495622 systemd[1]: sshd@26-10.0.0.56:22-10.0.0.1:60402.service: Deactivated successfully.
Mar 2 13:11:15.499294 systemd[1]: session-27.scope: Deactivated successfully.
Mar 2 13:11:15.503635 systemd-logind[1433]: Session 27 logged out. Waiting for processes to exit.
Mar 2 13:11:15.512387 systemd[1]: Started sshd@27-10.0.0.56:22-10.0.0.1:60418.service - OpenSSH per-connection server daemon (10.0.0.1:60418).
Mar 2 13:11:15.514388 systemd-logind[1433]: Removed session 27.
Mar 2 13:11:15.549487 kubelet[2576]: E0302 13:11:15.549347 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:15.592227 sshd[4386]: Accepted publickey for core from 10.0.0.1 port 60418 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:15.596570 sshd[4386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:15.615730 systemd-logind[1433]: New session 28 of user core.
Mar 2 13:11:15.632556 systemd[1]: Started session-28.scope - Session 28 of User core.
Mar 2 13:11:16.188164 sshd[4386]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:16.202242 systemd[1]: Started sshd@28-10.0.0.56:22-10.0.0.1:60424.service - OpenSSH per-connection server daemon (10.0.0.1:60424).
Mar 2 13:11:16.208630 systemd[1]: sshd@27-10.0.0.56:22-10.0.0.1:60418.service: Deactivated successfully.
Mar 2 13:11:16.216214 systemd[1]: session-28.scope: Deactivated successfully.
Mar 2 13:11:16.222544 systemd-logind[1433]: Session 28 logged out. Waiting for processes to exit.
Mar 2 13:11:16.231296 systemd-logind[1433]: Removed session 28.
Mar 2 13:11:16.287431 sshd[4396]: Accepted publickey for core from 10.0.0.1 port 60424 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:16.292137 sshd[4396]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:16.308535 systemd-logind[1433]: New session 29 of user core.
Mar 2 13:11:16.321209 systemd[1]: Started session-29.scope - Session 29 of User core.
Mar 2 13:11:16.547958 sshd[4396]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:16.559579 systemd[1]: sshd@28-10.0.0.56:22-10.0.0.1:60424.service: Deactivated successfully.
Mar 2 13:11:16.567548 systemd[1]: session-29.scope: Deactivated successfully.
Mar 2 13:11:16.571165 systemd-logind[1433]: Session 29 logged out. Waiting for processes to exit.
Mar 2 13:11:16.578560 systemd-logind[1433]: Removed session 29.
Mar 2 13:11:21.606054 systemd[1]: Started sshd@29-10.0.0.56:22-10.0.0.1:37092.service - OpenSSH per-connection server daemon (10.0.0.1:37092).
Mar 2 13:11:21.658489 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 37092 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:21.662267 sshd[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:21.681459 systemd-logind[1433]: New session 30 of user core.
Mar 2 13:11:21.700950 systemd[1]: Started session-30.scope - Session 30 of User core.
Mar 2 13:11:21.910011 sshd[4420]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:21.919079 systemd[1]: sshd@29-10.0.0.56:22-10.0.0.1:37092.service: Deactivated successfully.
Mar 2 13:11:21.942046 systemd[1]: session-30.scope: Deactivated successfully.
Mar 2 13:11:21.945517 systemd-logind[1433]: Session 30 logged out. Waiting for processes to exit.
Mar 2 13:11:21.952190 systemd-logind[1433]: Removed session 30.
Mar 2 13:11:26.949453 systemd[1]: Started sshd@30-10.0.0.56:22-10.0.0.1:37100.service - OpenSSH per-connection server daemon (10.0.0.1:37100).
Mar 2 13:11:27.043210 sshd[4434]: Accepted publickey for core from 10.0.0.1 port 37100 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:27.048160 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:27.065740 systemd-logind[1433]: New session 31 of user core.
Mar 2 13:11:27.082319 systemd[1]: Started session-31.scope - Session 31 of User core.
Mar 2 13:11:27.334009 sshd[4434]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:27.341163 systemd[1]: sshd@30-10.0.0.56:22-10.0.0.1:37100.service: Deactivated successfully.
Mar 2 13:11:27.343614 systemd[1]: session-31.scope: Deactivated successfully.
Mar 2 13:11:27.347953 systemd-logind[1433]: Session 31 logged out. Waiting for processes to exit.
Mar 2 13:11:27.350750 systemd-logind[1433]: Removed session 31.
Mar 2 13:11:30.554432 kubelet[2576]: E0302 13:11:30.549443 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:32.369792 systemd[1]: Started sshd@31-10.0.0.56:22-10.0.0.1:48172.service - OpenSSH per-connection server daemon (10.0.0.1:48172).
Mar 2 13:11:32.424038 sshd[4451]: Accepted publickey for core from 10.0.0.1 port 48172 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:32.429991 sshd[4451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:32.441165 systemd-logind[1433]: New session 32 of user core.
Mar 2 13:11:32.457490 systemd[1]: Started session-32.scope - Session 32 of User core.
Mar 2 13:11:32.681767 sshd[4451]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:32.692348 systemd[1]: sshd@31-10.0.0.56:22-10.0.0.1:48172.service: Deactivated successfully.
Mar 2 13:11:32.698327 systemd[1]: session-32.scope: Deactivated successfully.
Mar 2 13:11:32.703967 systemd-logind[1433]: Session 32 logged out. Waiting for processes to exit.
Mar 2 13:11:32.712539 systemd-logind[1433]: Removed session 32.
Mar 2 13:11:34.549495 kubelet[2576]: E0302 13:11:34.549207 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:37.702020 systemd[1]: Started sshd@32-10.0.0.56:22-10.0.0.1:48184.service - OpenSSH per-connection server daemon (10.0.0.1:48184).
Mar 2 13:11:37.756669 sshd[4467]: Accepted publickey for core from 10.0.0.1 port 48184 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:37.758925 sshd[4467]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:37.771547 systemd-logind[1433]: New session 33 of user core.
Mar 2 13:11:37.776178 systemd[1]: Started session-33.scope - Session 33 of User core.
Mar 2 13:11:37.944100 sshd[4467]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:37.960191 systemd[1]: sshd@32-10.0.0.56:22-10.0.0.1:48184.service: Deactivated successfully.
Mar 2 13:11:37.963887 systemd[1]: session-33.scope: Deactivated successfully.
Mar 2 13:11:37.968963 systemd-logind[1433]: Session 33 logged out. Waiting for processes to exit.
Mar 2 13:11:37.980110 systemd[1]: Started sshd@33-10.0.0.56:22-10.0.0.1:48190.service - OpenSSH per-connection server daemon (10.0.0.1:48190).
Mar 2 13:11:37.984203 systemd-logind[1433]: Removed session 33.
Mar 2 13:11:38.023309 sshd[4481]: Accepted publickey for core from 10.0.0.1 port 48190 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:38.026044 sshd[4481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:38.035005 systemd-logind[1433]: New session 34 of user core.
Mar 2 13:11:38.042112 systemd[1]: Started session-34.scope - Session 34 of User core.
Mar 2 13:11:39.532299 containerd[1451]: time="2026-03-02T13:11:39.532243531Z" level=info msg="StopContainer for \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\" with timeout 30 (s)"
Mar 2 13:11:39.541108 containerd[1451]: time="2026-03-02T13:11:39.540335766Z" level=info msg="Stop container \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\" with signal terminated"
Mar 2 13:11:39.581231 containerd[1451]: time="2026-03-02T13:11:39.581124537Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Mar 2 13:11:39.589682 systemd[1]: cri-containerd-6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c.scope: Deactivated successfully.
Mar 2 13:11:39.590316 systemd[1]: cri-containerd-6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c.scope: Consumed 2.504s CPU time.
Mar 2 13:11:39.606283 containerd[1451]: time="2026-03-02T13:11:39.605105054Z" level=info msg="StopContainer for \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\" with timeout 2 (s)"
Mar 2 13:11:39.613486 containerd[1451]: time="2026-03-02T13:11:39.613277941Z" level=info msg="Stop container \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\" with signal terminated"
Mar 2 13:11:39.636290 systemd-networkd[1374]: lxc_health: Link DOWN
Mar 2 13:11:39.636709 systemd-networkd[1374]: lxc_health: Lost carrier
Mar 2 13:11:39.669220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c-rootfs.mount: Deactivated successfully.
Mar 2 13:11:39.676174 systemd[1]: cri-containerd-58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb.scope: Deactivated successfully.
Mar 2 13:11:39.678420 systemd[1]: cri-containerd-58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb.scope: Consumed 19.367s CPU time.
Mar 2 13:11:39.685924 kubelet[2576]: E0302 13:11:39.685720 2576 cadvisor_stats_provider.go:567] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e43ca1_9eba_4b7f_8715_05a8e14ab597.slice/cri-containerd-58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb.scope\": RecentStats: unable to find data in memory cache]"
Mar 2 13:11:39.691658 containerd[1451]: time="2026-03-02T13:11:39.691471768Z" level=info msg="shim disconnected" id=6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c namespace=k8s.io
Mar 2 13:11:39.691658 containerd[1451]: time="2026-03-02T13:11:39.691639523Z" level=warning msg="cleaning up after shim disconnected" id=6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c namespace=k8s.io
Mar 2 13:11:39.691658 containerd[1451]: time="2026-03-02T13:11:39.691658260Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:11:39.723944 containerd[1451]: time="2026-03-02T13:11:39.723767435Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:11:39Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:11:39.733106 containerd[1451]: time="2026-03-02T13:11:39.732360393Z" level=info msg="StopContainer for \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\" returns successfully"
Mar 2 13:11:39.735726 containerd[1451]: time="2026-03-02T13:11:39.734422055Z" level=info msg="StopPodSandbox for \"8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860\""
Mar 2 13:11:39.735726 containerd[1451]: time="2026-03-02T13:11:39.734502606Z" level=info msg="Container to stop \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:11:39.739120 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb-rootfs.mount: Deactivated successfully.
Mar 2 13:11:39.739345 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860-shm.mount: Deactivated successfully.
Mar 2 13:11:39.749446 systemd[1]: cri-containerd-8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860.scope: Deactivated successfully.
Mar 2 13:11:39.754618 containerd[1451]: time="2026-03-02T13:11:39.754267705Z" level=info msg="shim disconnected" id=58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb namespace=k8s.io
Mar 2 13:11:39.754618 containerd[1451]: time="2026-03-02T13:11:39.754336744Z" level=warning msg="cleaning up after shim disconnected" id=58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb namespace=k8s.io
Mar 2 13:11:39.754618 containerd[1451]: time="2026-03-02T13:11:39.754351914Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:11:39.795771 containerd[1451]: time="2026-03-02T13:11:39.795424300Z" level=info msg="StopContainer for \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\" returns successfully"
Mar 2 13:11:39.798342 containerd[1451]: time="2026-03-02T13:11:39.798240339Z" level=info msg="StopPodSandbox for \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\""
Mar 2 13:11:39.798494 containerd[1451]: time="2026-03-02T13:11:39.798335777Z" level=info msg="Container to stop \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:11:39.798494 containerd[1451]: time="2026-03-02T13:11:39.798364682Z" level=info msg="Container to stop \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:11:39.798494 containerd[1451]: time="2026-03-02T13:11:39.798381604Z" level=info msg="Container to stop \"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:11:39.798494 containerd[1451]: time="2026-03-02T13:11:39.798401651Z" level=info msg="Container to stop \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:11:39.798494 containerd[1451]: time="2026-03-02T13:11:39.798418613Z" level=info msg="Container to stop \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:11:39.803702 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb-shm.mount: Deactivated successfully.
Mar 2 13:11:39.811446 systemd[1]: cri-containerd-8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb.scope: Deactivated successfully.
Mar 2 13:11:39.818748 containerd[1451]: time="2026-03-02T13:11:39.818645378Z" level=info msg="shim disconnected" id=8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860 namespace=k8s.io
Mar 2 13:11:39.818748 containerd[1451]: time="2026-03-02T13:11:39.818711843Z" level=warning msg="cleaning up after shim disconnected" id=8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860 namespace=k8s.io
Mar 2 13:11:39.818748 containerd[1451]: time="2026-03-02T13:11:39.818724858Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:11:39.855590 containerd[1451]: time="2026-03-02T13:11:39.855240845Z" level=info msg="TearDown network for sandbox \"8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860\" successfully"
Mar 2 13:11:39.855590 containerd[1451]: time="2026-03-02T13:11:39.855303073Z" level=info msg="StopPodSandbox for \"8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860\" returns successfully"
Mar 2 13:11:39.881902 containerd[1451]: time="2026-03-02T13:11:39.881753710Z" level=info msg="shim disconnected" id=8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb namespace=k8s.io
Mar 2 13:11:39.881902 containerd[1451]: time="2026-03-02T13:11:39.881811709Z" level=warning msg="cleaning up after shim disconnected" id=8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb namespace=k8s.io
Mar 2 13:11:39.881902 containerd[1451]: time="2026-03-02T13:11:39.881883414Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:11:39.931405 containerd[1451]: time="2026-03-02T13:11:39.931242019Z" level=info msg="TearDown network for sandbox \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" successfully"
Mar 2 13:11:39.931405 containerd[1451]: time="2026-03-02T13:11:39.931323822Z" level=info msg="StopPodSandbox for \"8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb\" returns successfully"
Mar 2 13:11:39.956458 kubelet[2576]: I0302 13:11:39.956390 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjdpn\" (UniqueName: \"kubernetes.io/projected/c41f0d9f-e90d-42b4-8326-304f55aa778f-kube-api-access-kjdpn\") pod \"c41f0d9f-e90d-42b4-8326-304f55aa778f\" (UID: \"c41f0d9f-e90d-42b4-8326-304f55aa778f\") "
Mar 2 13:11:39.956458 kubelet[2576]: I0302 13:11:39.956456 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c41f0d9f-e90d-42b4-8326-304f55aa778f-cilium-config-path\") pod \"c41f0d9f-e90d-42b4-8326-304f55aa778f\" (UID: \"c41f0d9f-e90d-42b4-8326-304f55aa778f\") "
Mar 2 13:11:39.963092 kubelet[2576]: I0302 13:11:39.963010 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c41f0d9f-e90d-42b4-8326-304f55aa778f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c41f0d9f-e90d-42b4-8326-304f55aa778f" (UID: "c41f0d9f-e90d-42b4-8326-304f55aa778f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 2 13:11:39.967141 kubelet[2576]: I0302 13:11:39.967060 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c41f0d9f-e90d-42b4-8326-304f55aa778f-kube-api-access-kjdpn" (OuterVolumeSpecName: "kube-api-access-kjdpn") pod "c41f0d9f-e90d-42b4-8326-304f55aa778f" (UID: "c41f0d9f-e90d-42b4-8326-304f55aa778f"). InnerVolumeSpecName "kube-api-access-kjdpn". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 2 13:11:39.989002 kubelet[2576]: I0302 13:11:39.988312 2576 scope.go:117] "RemoveContainer" containerID="6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c"
Mar 2 13:11:39.997395 containerd[1451]: time="2026-03-02T13:11:39.997072306Z" level=info msg="RemoveContainer for \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\""
Mar 2 13:11:40.008468 systemd[1]: Removed slice kubepods-besteffort-podc41f0d9f_e90d_42b4_8326_304f55aa778f.slice - libcontainer container kubepods-besteffort-podc41f0d9f_e90d_42b4_8326_304f55aa778f.slice.
Mar 2 13:11:40.008669 systemd[1]: kubepods-besteffort-podc41f0d9f_e90d_42b4_8326_304f55aa778f.slice: Consumed 2.612s CPU time.
Mar 2 13:11:40.012209 containerd[1451]: time="2026-03-02T13:11:40.012112863Z" level=info msg="RemoveContainer for \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\" returns successfully"
Mar 2 13:11:40.012697 kubelet[2576]: I0302 13:11:40.012475 2576 scope.go:117] "RemoveContainer" containerID="6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c"
Mar 2 13:11:40.013292 containerd[1451]: time="2026-03-02T13:11:40.013205098Z" level=error msg="ContainerStatus for \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\": not found"
Mar 2 13:11:40.013811 kubelet[2576]: E0302 13:11:40.013687 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\": not found" containerID="6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c"
Mar 2 13:11:40.013811 kubelet[2576]: I0302 13:11:40.013758 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c"} err="failed to get container status \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\": rpc error: code = NotFound desc = an error occurred when try to find container \"6efd8796d618511bc69b6344368ecf6055a7f0fea8ae8af5a7a5484c99959f9c\": not found"
Mar 2 13:11:40.013811 kubelet[2576]: I0302 13:11:40.013807 2576 scope.go:117] "RemoveContainer" containerID="58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb"
Mar 2 13:11:40.017093 containerd[1451]: time="2026-03-02T13:11:40.017047443Z" level=info msg="RemoveContainer for \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\""
Mar 2 13:11:40.024621 containerd[1451]: time="2026-03-02T13:11:40.024182526Z" level=info msg="RemoveContainer for \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\" returns successfully"
Mar 2 13:11:40.024763 kubelet[2576]: I0302 13:11:40.024702 2576 scope.go:117] "RemoveContainer" containerID="3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143"
Mar 2 13:11:40.029213 containerd[1451]: time="2026-03-02T13:11:40.028649852Z" level=info msg="RemoveContainer for \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\""
Mar 2 13:11:40.039148 containerd[1451]: time="2026-03-02T13:11:40.038988493Z" level=info msg="RemoveContainer for \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\" returns successfully"
Mar 2 13:11:40.039881 kubelet[2576]: I0302 13:11:40.039741 2576 scope.go:117] "RemoveContainer" containerID="dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956"
Mar 2 13:11:40.042077 containerd[1451]: time="2026-03-02T13:11:40.042014436Z" level=info msg="RemoveContainer for \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\""
Mar 2 13:11:40.046998 containerd[1451]: time="2026-03-02T13:11:40.046771456Z" level=info msg="RemoveContainer for \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\" returns successfully"
Mar 2 13:11:40.047311 kubelet[2576]: I0302 13:11:40.047190 2576 scope.go:117] "RemoveContainer" containerID="451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c"
Mar 2 13:11:40.050635 containerd[1451]: time="2026-03-02T13:11:40.050487898Z" level=info msg="RemoveContainer for \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\""
Mar 2 13:11:40.056239 containerd[1451]: time="2026-03-02T13:11:40.056077645Z" level=info msg="RemoveContainer for \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\" returns successfully"
Mar 2 13:11:40.060308 kubelet[2576]: I0302 13:11:40.056626 2576 scope.go:117] "RemoveContainer" containerID="b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2"
Mar 2 13:11:40.060308 kubelet[2576]: I0302 13:11:40.056805 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cni-path\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") "
Mar 2 13:11:40.060308 kubelet[2576]: I0302 13:11:40.057216 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-bpf-maps\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") "
Mar 2 13:11:40.060308 kubelet[2576]: I0302 13:11:40.057259 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-host-proc-sys-kernel\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") "
Mar 2 13:11:40.060308 kubelet[2576]: I0302 13:11:40.057284 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started
for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-hostproc\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.060308 kubelet[2576]: I0302 13:11:40.057306 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-run\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.060308 kubelet[2576]: I0302 13:11:40.057337 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-config-path\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.060675 kubelet[2576]: I0302 13:11:40.057361 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-host-proc-sys-net\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.060675 kubelet[2576]: I0302 13:11:40.057387 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-etc-cni-netd\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.060675 kubelet[2576]: I0302 13:11:40.058085 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-hostproc" (OuterVolumeSpecName: "hostproc") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:11:40.060675 kubelet[2576]: I0302 13:11:40.058127 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cni-path" (OuterVolumeSpecName: "cni-path") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:11:40.060675 kubelet[2576]: I0302 13:11:40.058154 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:11:40.061066 kubelet[2576]: I0302 13:11:40.058173 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:11:40.061276 kubelet[2576]: I0302 13:11:40.057415 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34e43ca1-9eba-4b7f-8715-05a8e14ab597-clustermesh-secrets\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.061276 kubelet[2576]: I0302 13:11:40.061233 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34e43ca1-9eba-4b7f-8715-05a8e14ab597-hubble-tls\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.061276 kubelet[2576]: I0302 13:11:40.061262 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-lib-modules\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.061491 kubelet[2576]: I0302 13:11:40.061286 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-cgroup\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.061491 kubelet[2576]: I0302 13:11:40.061316 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6nwz5\" (UniqueName: \"kubernetes.io/projected/34e43ca1-9eba-4b7f-8715-05a8e14ab597-kube-api-access-6nwz5\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.061491 kubelet[2576]: I0302 13:11:40.061339 2576 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-xtables-lock\") pod \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\" (UID: \"34e43ca1-9eba-4b7f-8715-05a8e14ab597\") " Mar 2 13:11:40.061491 kubelet[2576]: I0302 13:11:40.061397 2576 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.061491 kubelet[2576]: I0302 13:11:40.061412 2576 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.061491 kubelet[2576]: I0302 13:11:40.061425 2576 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kjdpn\" (UniqueName: \"kubernetes.io/projected/c41f0d9f-e90d-42b4-8326-304f55aa778f-kube-api-access-kjdpn\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.061491 kubelet[2576]: I0302 13:11:40.061445 2576 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.061797 kubelet[2576]: I0302 13:11:40.061457 2576 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.061797 kubelet[2576]: I0302 13:11:40.061470 2576 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c41f0d9f-e90d-42b4-8326-304f55aa778f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.061797 kubelet[2576]: I0302 13:11:40.061505 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:11:40.061797 kubelet[2576]: I0302 13:11:40.061600 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:11:40.061797 kubelet[2576]: I0302 13:11:40.061630 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:11:40.062805 containerd[1451]: time="2026-03-02T13:11:40.062707586Z" level=info msg="RemoveContainer for \"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\"" Mar 2 13:11:40.063182 kubelet[2576]: I0302 13:11:40.063047 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:11:40.063481 kubelet[2576]: I0302 13:11:40.063282 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:11:40.063481 kubelet[2576]: I0302 13:11:40.063309 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:11:40.065913 kubelet[2576]: I0302 13:11:40.065662 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:11:40.068409 kubelet[2576]: I0302 13:11:40.068202 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34e43ca1-9eba-4b7f-8715-05a8e14ab597-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:11:40.068916 kubelet[2576]: I0302 13:11:40.068411 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e43ca1-9eba-4b7f-8715-05a8e14ab597-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:11:40.071239 kubelet[2576]: I0302 13:11:40.071132 2576 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34e43ca1-9eba-4b7f-8715-05a8e14ab597-kube-api-access-6nwz5" (OuterVolumeSpecName: "kube-api-access-6nwz5") pod "34e43ca1-9eba-4b7f-8715-05a8e14ab597" (UID: "34e43ca1-9eba-4b7f-8715-05a8e14ab597"). InnerVolumeSpecName "kube-api-access-6nwz5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:11:40.072565 containerd[1451]: time="2026-03-02T13:11:40.072386033Z" level=info msg="RemoveContainer for \"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\" returns successfully" Mar 2 13:11:40.073126 kubelet[2576]: I0302 13:11:40.073046 2576 scope.go:117] "RemoveContainer" containerID="58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb" Mar 2 13:11:40.073449 containerd[1451]: time="2026-03-02T13:11:40.073357855Z" level=error msg="ContainerStatus for \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\": not found" Mar 2 13:11:40.074077 kubelet[2576]: E0302 13:11:40.073660 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\": not found" 
containerID="58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb" Mar 2 13:11:40.074077 kubelet[2576]: I0302 13:11:40.073727 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb"} err="failed to get container status \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\": rpc error: code = NotFound desc = an error occurred when try to find container \"58e53f4002cabd3af7710526c83dd121d7510b74e07f6d63bd5f2c021d31e1fb\": not found" Mar 2 13:11:40.074077 kubelet[2576]: I0302 13:11:40.073753 2576 scope.go:117] "RemoveContainer" containerID="3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143" Mar 2 13:11:40.074310 containerd[1451]: time="2026-03-02T13:11:40.074240241Z" level=error msg="ContainerStatus for \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\": not found" Mar 2 13:11:40.074800 kubelet[2576]: E0302 13:11:40.074682 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\": not found" containerID="3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143" Mar 2 13:11:40.074800 kubelet[2576]: I0302 13:11:40.074746 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143"} err="failed to get container status \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a827fc2f63733ec49e3a51440e7fcbb5bf49b4c96bb16be47451d87c4b1b143\": not found" Mar 2 13:11:40.074800 
kubelet[2576]: I0302 13:11:40.074769 2576 scope.go:117] "RemoveContainer" containerID="dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956" Mar 2 13:11:40.075467 containerd[1451]: time="2026-03-02T13:11:40.075361976Z" level=error msg="ContainerStatus for \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\": not found" Mar 2 13:11:40.075689 kubelet[2576]: E0302 13:11:40.075653 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\": not found" containerID="dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956" Mar 2 13:11:40.075759 kubelet[2576]: I0302 13:11:40.075683 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956"} err="failed to get container status \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd485b676705018971399cb44c1049d5be4b67c0c1ad23a020d026f726790956\": not found" Mar 2 13:11:40.075759 kubelet[2576]: I0302 13:11:40.075701 2576 scope.go:117] "RemoveContainer" containerID="451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c" Mar 2 13:11:40.076245 containerd[1451]: time="2026-03-02T13:11:40.076141062Z" level=error msg="ContainerStatus for \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\": not found" Mar 2 13:11:40.076697 kubelet[2576]: E0302 13:11:40.076576 2576 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\": not found" containerID="451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c" Mar 2 13:11:40.076697 kubelet[2576]: I0302 13:11:40.076651 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c"} err="failed to get container status \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\": rpc error: code = NotFound desc = an error occurred when try to find container \"451cc88d840427ea4a3ab92510738da2628d409275dbe81d4c213af28ad6493c\": not found" Mar 2 13:11:40.076697 kubelet[2576]: I0302 13:11:40.076691 2576 scope.go:117] "RemoveContainer" containerID="b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2" Mar 2 13:11:40.077297 containerd[1451]: time="2026-03-02T13:11:40.077269941Z" level=error msg="ContainerStatus for \"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\": not found" Mar 2 13:11:40.077476 kubelet[2576]: E0302 13:11:40.077457 2576 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\": not found" containerID="b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2" Mar 2 13:11:40.077586 kubelet[2576]: I0302 13:11:40.077484 2576 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2"} err="failed to get container status 
\"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\": rpc error: code = NotFound desc = an error occurred when try to find container \"b241d414e8fe5af76398148485b066944b2aff4afb3e2f5da608eb18d79eb9d2\": not found" Mar 2 13:11:40.163030 kubelet[2576]: I0302 13:11:40.162973 2576 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/34e43ca1-9eba-4b7f-8715-05a8e14ab597-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.163030 kubelet[2576]: I0302 13:11:40.163023 2576 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.163030 kubelet[2576]: I0302 13:11:40.163040 2576 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.163321 kubelet[2576]: I0302 13:11:40.163052 2576 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6nwz5\" (UniqueName: \"kubernetes.io/projected/34e43ca1-9eba-4b7f-8715-05a8e14ab597-kube-api-access-6nwz5\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.163321 kubelet[2576]: I0302 13:11:40.163066 2576 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.163321 kubelet[2576]: I0302 13:11:40.163078 2576 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.163321 kubelet[2576]: I0302 13:11:40.163089 2576 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/34e43ca1-9eba-4b7f-8715-05a8e14ab597-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.163321 kubelet[2576]: I0302 13:11:40.163100 2576 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.163321 kubelet[2576]: I0302 13:11:40.163115 2576 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/34e43ca1-9eba-4b7f-8715-05a8e14ab597-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.163321 kubelet[2576]: I0302 13:11:40.163126 2576 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/34e43ca1-9eba-4b7f-8715-05a8e14ab597-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 2 13:11:40.316439 systemd[1]: Removed slice kubepods-burstable-pod34e43ca1_9eba_4b7f_8715_05a8e14ab597.slice - libcontainer container kubepods-burstable-pod34e43ca1_9eba_4b7f_8715_05a8e14ab597.slice. Mar 2 13:11:40.316624 systemd[1]: kubepods-burstable-pod34e43ca1_9eba_4b7f_8715_05a8e14ab597.slice: Consumed 19.600s CPU time. Mar 2 13:11:40.534378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8916c348a51f6a15dab4977d69b2f64caceb557d207b3639e40334a82b5fe860-rootfs.mount: Deactivated successfully. Mar 2 13:11:40.534581 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8039c0f377e4323e6c376f5b2bb9261913c2d71612d24b73d774766b1d6d6acb-rootfs.mount: Deactivated successfully. Mar 2 13:11:40.534661 systemd[1]: var-lib-kubelet-pods-c41f0d9f\x2de90d\x2d42b4\x2d8326\x2d304f55aa778f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkjdpn.mount: Deactivated successfully. 
Mar 2 13:11:40.534739 systemd[1]: var-lib-kubelet-pods-34e43ca1\x2d9eba\x2d4b7f\x2d8715\x2d05a8e14ab597-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6nwz5.mount: Deactivated successfully. Mar 2 13:11:40.534814 systemd[1]: var-lib-kubelet-pods-34e43ca1\x2d9eba\x2d4b7f\x2d8715\x2d05a8e14ab597-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Mar 2 13:11:40.534961 systemd[1]: var-lib-kubelet-pods-34e43ca1\x2d9eba\x2d4b7f\x2d8715\x2d05a8e14ab597-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Mar 2 13:11:40.551746 kubelet[2576]: I0302 13:11:40.551462 2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34e43ca1-9eba-4b7f-8715-05a8e14ab597" path="/var/lib/kubelet/pods/34e43ca1-9eba-4b7f-8715-05a8e14ab597/volumes" Mar 2 13:11:40.552991 kubelet[2576]: I0302 13:11:40.552810 2576 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c41f0d9f-e90d-42b4-8326-304f55aa778f" path="/var/lib/kubelet/pods/c41f0d9f-e90d-42b4-8326-304f55aa778f/volumes" Mar 2 13:11:41.389092 sshd[4481]: pam_unix(sshd:session): session closed for user core Mar 2 13:11:41.406460 systemd[1]: sshd@33-10.0.0.56:22-10.0.0.1:48190.service: Deactivated successfully. Mar 2 13:11:41.409381 systemd[1]: session-34.scope: Deactivated successfully. Mar 2 13:11:41.412746 systemd-logind[1433]: Session 34 logged out. Waiting for processes to exit. Mar 2 13:11:41.421409 systemd[1]: Started sshd@34-10.0.0.56:22-10.0.0.1:34254.service - OpenSSH per-connection server daemon (10.0.0.1:34254). Mar 2 13:11:41.429144 systemd-logind[1433]: Removed session 34. Mar 2 13:11:41.476755 sshd[4642]: Accepted publickey for core from 10.0.0.1 port 34254 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:11:41.479462 sshd[4642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:11:41.488225 systemd-logind[1433]: New session 35 of user core. 
Mar 2 13:11:41.502271 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 2 13:11:42.383073 kubelet[2576]: E0302 13:11:42.382758 2576 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Mar 2 13:11:42.553129 kubelet[2576]: E0302 13:11:42.552966 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:11:42.624520 sshd[4642]: pam_unix(sshd:session): session closed for user core Mar 2 13:11:42.642113 systemd[1]: sshd@34-10.0.0.56:22-10.0.0.1:34254.service: Deactivated successfully. Mar 2 13:11:42.645967 systemd[1]: session-35.scope: Deactivated successfully. Mar 2 13:11:42.652206 systemd-logind[1433]: Session 35 logged out. Waiting for processes to exit. Mar 2 13:11:42.664127 systemd[1]: Started sshd@35-10.0.0.56:22-10.0.0.1:34264.service - OpenSSH per-connection server daemon (10.0.0.1:34264). Mar 2 13:11:42.672243 systemd-logind[1433]: Removed session 35. Mar 2 13:11:42.712368 systemd[1]: Created slice kubepods-burstable-poda6fa261c_7c16_4139_afeb_9d90e42dcbeb.slice - libcontainer container kubepods-burstable-poda6fa261c_7c16_4139_afeb_9d90e42dcbeb.slice. Mar 2 13:11:42.730025 sshd[4655]: Accepted publickey for core from 10.0.0.1 port 34264 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:11:42.734653 sshd[4655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:11:42.752071 systemd-logind[1433]: New session 36 of user core. Mar 2 13:11:42.764511 systemd[1]: Started session-36.scope - Session 36 of User core. 
Mar 2 13:11:42.795872 kubelet[2576]: I0302 13:11:42.795640 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-lib-modules\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4" Mar 2 13:11:42.795872 kubelet[2576]: I0302 13:11:42.795739 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-cilium-config-path\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4" Mar 2 13:11:42.795872 kubelet[2576]: I0302 13:11:42.795770 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-hubble-tls\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4" Mar 2 13:11:42.795872 kubelet[2576]: I0302 13:11:42.795796 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-cilium-run\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4" Mar 2 13:11:42.797109 kubelet[2576]: I0302 13:11:42.795917 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-bpf-maps\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4" Mar 2 13:11:42.797109 kubelet[2576]: I0302 13:11:42.795988 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-hostproc\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4"
Mar 2 13:11:42.797109 kubelet[2576]: I0302 13:11:42.796009 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-cilium-cgroup\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4"
Mar 2 13:11:42.797109 kubelet[2576]: I0302 13:11:42.796030 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-xtables-lock\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4"
Mar 2 13:11:42.797109 kubelet[2576]: I0302 13:11:42.796049 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-clustermesh-secrets\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4"
Mar 2 13:11:42.797109 kubelet[2576]: I0302 13:11:42.796068 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-cilium-ipsec-secrets\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4"
Mar 2 13:11:42.797335 kubelet[2576]: I0302 13:11:42.796097 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-host-proc-sys-kernel\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4"
Mar 2 13:11:42.797335 kubelet[2576]: I0302 13:11:42.796131 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzccf\" (UniqueName: \"kubernetes.io/projected/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-kube-api-access-dzccf\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4"
Mar 2 13:11:42.797335 kubelet[2576]: I0302 13:11:42.796184 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-cni-path\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4"
Mar 2 13:11:42.797335 kubelet[2576]: I0302 13:11:42.796225 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-etc-cni-netd\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4"
Mar 2 13:11:42.797335 kubelet[2576]: I0302 13:11:42.796246 2576 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6fa261c-7c16-4139-afeb-9d90e42dcbeb-host-proc-sys-net\") pod \"cilium-vzjh4\" (UID: \"a6fa261c-7c16-4139-afeb-9d90e42dcbeb\") " pod="kube-system/cilium-vzjh4"
Mar 2 13:11:42.840973 sshd[4655]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:42.858147 systemd[1]: sshd@35-10.0.0.56:22-10.0.0.1:34264.service: Deactivated successfully.
Mar 2 13:11:42.861789 systemd[1]: session-36.scope: Deactivated successfully.
Mar 2 13:11:42.867893 systemd-logind[1433]: Session 36 logged out. Waiting for processes to exit.
Mar 2 13:11:42.879479 systemd[1]: Started sshd@36-10.0.0.56:22-10.0.0.1:34266.service - OpenSSH per-connection server daemon (10.0.0.1:34266).
Mar 2 13:11:42.881998 systemd-logind[1433]: Removed session 36.
Mar 2 13:11:42.955975 sshd[4663]: Accepted publickey for core from 10.0.0.1 port 34266 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:11:42.958446 sshd[4663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:11:42.972327 systemd-logind[1433]: New session 37 of user core.
Mar 2 13:11:42.987233 systemd[1]: Started session-37.scope - Session 37 of User core.
Mar 2 13:11:43.024348 kubelet[2576]: E0302 13:11:43.024199 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:43.026646 containerd[1451]: time="2026-03-02T13:11:43.025411659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vzjh4,Uid:a6fa261c-7c16-4139-afeb-9d90e42dcbeb,Namespace:kube-system,Attempt:0,}"
Mar 2 13:11:43.098786 containerd[1451]: time="2026-03-02T13:11:43.098067349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:11:43.098786 containerd[1451]: time="2026-03-02T13:11:43.098203525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:11:43.098786 containerd[1451]: time="2026-03-02T13:11:43.098321367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:11:43.099106 containerd[1451]: time="2026-03-02T13:11:43.099062171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:11:43.143222 systemd[1]: Started cri-containerd-1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a.scope - libcontainer container 1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a.
Mar 2 13:11:43.198648 containerd[1451]: time="2026-03-02T13:11:43.197699506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vzjh4,Uid:a6fa261c-7c16-4139-afeb-9d90e42dcbeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\""
Mar 2 13:11:43.199406 kubelet[2576]: E0302 13:11:43.199120 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:43.210711 containerd[1451]: time="2026-03-02T13:11:43.210478017Z" level=info msg="CreateContainer within sandbox \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 2 13:11:43.246051 containerd[1451]: time="2026-03-02T13:11:43.245789109Z" level=info msg="CreateContainer within sandbox \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a7a69dd1049e0a77d2b0eba65c805295e8cbba4d76e244d83e1512692780ed20\""
Mar 2 13:11:43.248304 containerd[1451]: time="2026-03-02T13:11:43.246914002Z" level=info msg="StartContainer for \"a7a69dd1049e0a77d2b0eba65c805295e8cbba4d76e244d83e1512692780ed20\""
Mar 2 13:11:43.310268 systemd[1]: Started cri-containerd-a7a69dd1049e0a77d2b0eba65c805295e8cbba4d76e244d83e1512692780ed20.scope - libcontainer container a7a69dd1049e0a77d2b0eba65c805295e8cbba4d76e244d83e1512692780ed20.
Mar 2 13:11:43.364073 containerd[1451]: time="2026-03-02T13:11:43.363768987Z" level=info msg="StartContainer for \"a7a69dd1049e0a77d2b0eba65c805295e8cbba4d76e244d83e1512692780ed20\" returns successfully"
Mar 2 13:11:43.396987 systemd[1]: cri-containerd-a7a69dd1049e0a77d2b0eba65c805295e8cbba4d76e244d83e1512692780ed20.scope: Deactivated successfully.
Mar 2 13:11:43.496736 containerd[1451]: time="2026-03-02T13:11:43.496391387Z" level=info msg="shim disconnected" id=a7a69dd1049e0a77d2b0eba65c805295e8cbba4d76e244d83e1512692780ed20 namespace=k8s.io
Mar 2 13:11:43.496736 containerd[1451]: time="2026-03-02T13:11:43.496497837Z" level=warning msg="cleaning up after shim disconnected" id=a7a69dd1049e0a77d2b0eba65c805295e8cbba4d76e244d83e1512692780ed20 namespace=k8s.io
Mar 2 13:11:43.496736 containerd[1451]: time="2026-03-02T13:11:43.496512525Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:11:43.551119 containerd[1451]: time="2026-03-02T13:11:43.550407717Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:11:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:11:44.026643 kubelet[2576]: E0302 13:11:44.025544 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:44.040451 containerd[1451]: time="2026-03-02T13:11:44.038414267Z" level=info msg="CreateContainer within sandbox \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 2 13:11:44.082196 containerd[1451]: time="2026-03-02T13:11:44.082001106Z" level=info msg="CreateContainer within sandbox \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b75803adfa3e2f231a942e7481d1c9c2bea45c25ab0ec03cca1b4005cc098deb\""
Mar 2 13:11:44.091907 containerd[1451]: time="2026-03-02T13:11:44.088502879Z" level=info msg="StartContainer for \"b75803adfa3e2f231a942e7481d1c9c2bea45c25ab0ec03cca1b4005cc098deb\""
Mar 2 13:11:44.203262 systemd[1]: Started cri-containerd-b75803adfa3e2f231a942e7481d1c9c2bea45c25ab0ec03cca1b4005cc098deb.scope - libcontainer container b75803adfa3e2f231a942e7481d1c9c2bea45c25ab0ec03cca1b4005cc098deb.
Mar 2 13:11:44.274116 containerd[1451]: time="2026-03-02T13:11:44.274018880Z" level=info msg="StartContainer for \"b75803adfa3e2f231a942e7481d1c9c2bea45c25ab0ec03cca1b4005cc098deb\" returns successfully"
Mar 2 13:11:44.303005 systemd[1]: cri-containerd-b75803adfa3e2f231a942e7481d1c9c2bea45c25ab0ec03cca1b4005cc098deb.scope: Deactivated successfully.
Mar 2 13:11:44.378977 containerd[1451]: time="2026-03-02T13:11:44.378898657Z" level=info msg="shim disconnected" id=b75803adfa3e2f231a942e7481d1c9c2bea45c25ab0ec03cca1b4005cc098deb namespace=k8s.io
Mar 2 13:11:44.379445 containerd[1451]: time="2026-03-02T13:11:44.379312505Z" level=warning msg="cleaning up after shim disconnected" id=b75803adfa3e2f231a942e7481d1c9c2bea45c25ab0ec03cca1b4005cc098deb namespace=k8s.io
Mar 2 13:11:44.379445 containerd[1451]: time="2026-03-02T13:11:44.379379962Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:11:44.420743 containerd[1451]: time="2026-03-02T13:11:44.419702665Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:11:44Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:11:44.919760 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b75803adfa3e2f231a942e7481d1c9c2bea45c25ab0ec03cca1b4005cc098deb-rootfs.mount: Deactivated successfully.
Mar 2 13:11:45.040530 kubelet[2576]: E0302 13:11:45.037531 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:45.070731 containerd[1451]: time="2026-03-02T13:11:45.069759659Z" level=info msg="CreateContainer within sandbox \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 2 13:11:45.240149 containerd[1451]: time="2026-03-02T13:11:45.238384282Z" level=info msg="CreateContainer within sandbox \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"872f63b71db63eb9fa2f7cfec18573ad120a67a322df818aa785b320b21b4f68\""
Mar 2 13:11:45.263108 containerd[1451]: time="2026-03-02T13:11:45.261605680Z" level=info msg="StartContainer for \"872f63b71db63eb9fa2f7cfec18573ad120a67a322df818aa785b320b21b4f68\""
Mar 2 13:11:45.359049 systemd[1]: Started cri-containerd-872f63b71db63eb9fa2f7cfec18573ad120a67a322df818aa785b320b21b4f68.scope - libcontainer container 872f63b71db63eb9fa2f7cfec18573ad120a67a322df818aa785b320b21b4f68.
Mar 2 13:11:45.449771 containerd[1451]: time="2026-03-02T13:11:45.449345503Z" level=info msg="StartContainer for \"872f63b71db63eb9fa2f7cfec18573ad120a67a322df818aa785b320b21b4f68\" returns successfully"
Mar 2 13:11:45.454100 systemd[1]: cri-containerd-872f63b71db63eb9fa2f7cfec18573ad120a67a322df818aa785b320b21b4f68.scope: Deactivated successfully.
Mar 2 13:11:45.547229 containerd[1451]: time="2026-03-02T13:11:45.546966694Z" level=info msg="shim disconnected" id=872f63b71db63eb9fa2f7cfec18573ad120a67a322df818aa785b320b21b4f68 namespace=k8s.io
Mar 2 13:11:45.547229 containerd[1451]: time="2026-03-02T13:11:45.547068054Z" level=warning msg="cleaning up after shim disconnected" id=872f63b71db63eb9fa2f7cfec18573ad120a67a322df818aa785b320b21b4f68 namespace=k8s.io
Mar 2 13:11:45.547229 containerd[1451]: time="2026-03-02T13:11:45.547082350Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:11:45.583574 containerd[1451]: time="2026-03-02T13:11:45.583133827Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:11:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:11:45.917082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-872f63b71db63eb9fa2f7cfec18573ad120a67a322df818aa785b320b21b4f68-rootfs.mount: Deactivated successfully.
Mar 2 13:11:46.052435 kubelet[2576]: E0302 13:11:46.052377 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:46.069119 containerd[1451]: time="2026-03-02T13:11:46.068619004Z" level=info msg="CreateContainer within sandbox \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 13:11:46.100225 containerd[1451]: time="2026-03-02T13:11:46.100082560Z" level=info msg="CreateContainer within sandbox \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c503a858eb6852e9b19fbbacc0489f4caba9ad510c696f74f381b2bf48292463\""
Mar 2 13:11:46.103086 containerd[1451]: time="2026-03-02T13:11:46.102028931Z" level=info msg="StartContainer for \"c503a858eb6852e9b19fbbacc0489f4caba9ad510c696f74f381b2bf48292463\""
Mar 2 13:11:46.166206 systemd[1]: Started cri-containerd-c503a858eb6852e9b19fbbacc0489f4caba9ad510c696f74f381b2bf48292463.scope - libcontainer container c503a858eb6852e9b19fbbacc0489f4caba9ad510c696f74f381b2bf48292463.
Mar 2 13:11:46.227701 systemd[1]: cri-containerd-c503a858eb6852e9b19fbbacc0489f4caba9ad510c696f74f381b2bf48292463.scope: Deactivated successfully.
Mar 2 13:11:46.235393 containerd[1451]: time="2026-03-02T13:11:46.235285347Z" level=info msg="StartContainer for \"c503a858eb6852e9b19fbbacc0489f4caba9ad510c696f74f381b2bf48292463\" returns successfully"
Mar 2 13:11:46.315940 containerd[1451]: time="2026-03-02T13:11:46.314941204Z" level=info msg="shim disconnected" id=c503a858eb6852e9b19fbbacc0489f4caba9ad510c696f74f381b2bf48292463 namespace=k8s.io
Mar 2 13:11:46.315940 containerd[1451]: time="2026-03-02T13:11:46.315044580Z" level=warning msg="cleaning up after shim disconnected" id=c503a858eb6852e9b19fbbacc0489f4caba9ad510c696f74f381b2bf48292463 namespace=k8s.io
Mar 2 13:11:46.315940 containerd[1451]: time="2026-03-02T13:11:46.315057043Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:11:46.361288 containerd[1451]: time="2026-03-02T13:11:46.360702405Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:11:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:11:46.916326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c503a858eb6852e9b19fbbacc0489f4caba9ad510c696f74f381b2bf48292463-rootfs.mount: Deactivated successfully.
Mar 2 13:11:47.076131 kubelet[2576]: E0302 13:11:47.076080 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:47.098073 containerd[1451]: time="2026-03-02T13:11:47.097749147Z" level=info msg="CreateContainer within sandbox \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 13:11:47.167637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3173964324.mount: Deactivated successfully.
Mar 2 13:11:47.175078 containerd[1451]: time="2026-03-02T13:11:47.174964526Z" level=info msg="CreateContainer within sandbox \"1dbf9fb1368660eb9ffba79dbc577f665db5d309453e2dd4b6295e4178c76a7a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9d1b4997b70d7085d7bbc3b927d8788240cbc68f26d75256c37b9bf103322aa2\""
Mar 2 13:11:47.179025 containerd[1451]: time="2026-03-02T13:11:47.177734089Z" level=info msg="StartContainer for \"9d1b4997b70d7085d7bbc3b927d8788240cbc68f26d75256c37b9bf103322aa2\""
Mar 2 13:11:47.275041 systemd[1]: Started cri-containerd-9d1b4997b70d7085d7bbc3b927d8788240cbc68f26d75256c37b9bf103322aa2.scope - libcontainer container 9d1b4997b70d7085d7bbc3b927d8788240cbc68f26d75256c37b9bf103322aa2.
Mar 2 13:11:47.375560 containerd[1451]: time="2026-03-02T13:11:47.375345387Z" level=info msg="StartContainer for \"9d1b4997b70d7085d7bbc3b927d8788240cbc68f26d75256c37b9bf103322aa2\" returns successfully"
Mar 2 13:11:47.392380 kubelet[2576]: E0302 13:11:47.391953 2576 kubelet.go:3012] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:11:47.916813 systemd[1]: run-containerd-runc-k8s.io-9d1b4997b70d7085d7bbc3b927d8788240cbc68f26d75256c37b9bf103322aa2-runc.hjQr1N.mount: Deactivated successfully.
Mar 2 13:11:48.091582 kubelet[2576]: E0302 13:11:48.091466 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:48.137930 kubelet[2576]: I0302 13:11:48.136410 2576 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vzjh4" podStartSLOduration=6.136392067 podStartE2EDuration="6.136392067s" podCreationTimestamp="2026-03-02 13:11:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:11:48.132344646 +0000 UTC m=+211.799927790" watchObservedRunningTime="2026-03-02 13:11:48.136392067 +0000 UTC m=+211.803975191"
Mar 2 13:11:48.336183 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 2 13:11:49.096751 kubelet[2576]: E0302 13:11:49.096564 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:51.560528 kubelet[2576]: I0302 13:11:51.560364 2576 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-02T13:11:51Z","lastTransitionTime":"2026-03-02T13:11:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 2 13:11:53.410138 systemd-networkd[1374]: lxc_health: Link UP
Mar 2 13:11:53.424377 systemd-networkd[1374]: lxc_health: Gained carrier
Mar 2 13:11:54.819204 systemd-networkd[1374]: lxc_health: Gained IPv6LL
Mar 2 13:11:55.021543 kubelet[2576]: E0302 13:11:55.021461 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:55.130331 kubelet[2576]: E0302 13:11:55.129213 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:56.135511 kubelet[2576]: E0302 13:11:56.135251 2576 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:11:59.082435 sshd[4663]: pam_unix(sshd:session): session closed for user core
Mar 2 13:11:59.088786 systemd-logind[1433]: Session 37 logged out. Waiting for processes to exit.
Mar 2 13:11:59.090575 systemd[1]: sshd@36-10.0.0.56:22-10.0.0.1:34266.service: Deactivated successfully.
Mar 2 13:11:59.095557 systemd[1]: session-37.scope: Deactivated successfully.
Mar 2 13:11:59.104299 systemd-logind[1433]: Removed session 37.