Mar 2 13:15:12.423707 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 2 11:01:37 -00 2026
Mar 2 13:15:12.423736 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:15:12.423754 kernel: BIOS-provided physical RAM map:
Mar 2 13:15:12.423763 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 2 13:15:12.423772 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 2 13:15:12.423781 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 2 13:15:12.423792 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 2 13:15:12.423801 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 2 13:15:12.423811 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 2 13:15:12.423823 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 2 13:15:12.423832 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 2 13:15:12.423842 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 2 13:15:12.423851 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 2 13:15:12.423861 kernel: NX (Execute Disable) protection: active
Mar 2 13:15:12.423872 kernel: APIC: Static calls initialized
Mar 2 13:15:12.423886 kernel: SMBIOS 2.8 present.
Mar 2 13:15:12.423896 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 2 13:15:12.423906 kernel: Hypervisor detected: KVM
Mar 2 13:15:12.423916 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 2 13:15:12.423926 kernel: kvm-clock: using sched offset of 8160864154 cycles
Mar 2 13:15:12.423937 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 2 13:15:12.423947 kernel: tsc: Detected 2445.426 MHz processor
Mar 2 13:15:12.423958 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 2 13:15:12.423969 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 2 13:15:12.423983 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 2 13:15:12.423993 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 2 13:15:12.424003 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 2 13:15:12.424014 kernel: Using GB pages for direct mapping
Mar 2 13:15:12.424024 kernel: ACPI: Early table checksum verification disabled
Mar 2 13:15:12.424034 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 2 13:15:12.424044 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:15:12.424054 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:15:12.424125 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:15:12.424139 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 2 13:15:12.424149 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:15:12.424159 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:15:12.424169 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:15:12.424180 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 2 13:15:12.424190 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Mar 2 13:15:12.424200 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Mar 2 13:15:12.424216 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 2 13:15:12.424231 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Mar 2 13:15:12.424241 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Mar 2 13:15:12.424253 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Mar 2 13:15:12.424344 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Mar 2 13:15:12.424357 kernel: No NUMA configuration found
Mar 2 13:15:12.424368 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 2 13:15:12.424384 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 2 13:15:12.424394 kernel: Zone ranges:
Mar 2 13:15:12.424403 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 2 13:15:12.424412 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 2 13:15:12.424421 kernel: Normal empty
Mar 2 13:15:12.424430 kernel: Movable zone start for each node
Mar 2 13:15:12.424439 kernel: Early memory node ranges
Mar 2 13:15:12.424450 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 2 13:15:12.424461 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 2 13:15:12.424472 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 2 13:15:12.424485 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 2 13:15:12.424494 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 2 13:15:12.424503 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 2 13:15:12.424513 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 2 13:15:12.424525 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 2 13:15:12.424534 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 2 13:15:12.424658 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 2 13:15:12.424673 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 2 13:15:12.424684 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 2 13:15:12.424700 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 2 13:15:12.424711 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 2 13:15:12.424722 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 2 13:15:12.424733 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 2 13:15:12.424744 kernel: TSC deadline timer available
Mar 2 13:15:12.424754 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 2 13:15:12.424765 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 2 13:15:12.424776 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 2 13:15:12.424786 kernel: kvm-guest: setup PV sched yield
Mar 2 13:15:12.424802 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 2 13:15:12.424812 kernel: Booting paravirtualized kernel on KVM
Mar 2 13:15:12.424823 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 2 13:15:12.424834 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 2 13:15:12.424845 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Mar 2 13:15:12.424856 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Mar 2 13:15:12.424867 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 2 13:15:12.424878 kernel: kvm-guest: PV spinlocks enabled
Mar 2 13:15:12.424889 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 2 13:15:12.424905 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:15:12.424917 kernel: random: crng init done
Mar 2 13:15:12.424928 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 2 13:15:12.424939 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 2 13:15:12.424950 kernel: Fallback order for Node 0: 0
Mar 2 13:15:12.424961 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 2 13:15:12.424971 kernel: Policy zone: DMA32
Mar 2 13:15:12.424981 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 2 13:15:12.424995 kernel: Memory: 2434604K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 136888K reserved, 0K cma-reserved)
Mar 2 13:15:12.425005 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 2 13:15:12.425014 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 2 13:15:12.425025 kernel: ftrace: allocated 149 pages with 4 groups
Mar 2 13:15:12.425035 kernel: Dynamic Preempt: voluntary
Mar 2 13:15:12.425045 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 2 13:15:12.425057 kernel: rcu: RCU event tracing is enabled.
Mar 2 13:15:12.425141 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 2 13:15:12.425151 kernel: Trampoline variant of Tasks RCU enabled.
Mar 2 13:15:12.425166 kernel: Rude variant of Tasks RCU enabled.
Mar 2 13:15:12.425176 kernel: Tracing variant of Tasks RCU enabled.
Mar 2 13:15:12.425186 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 2 13:15:12.425196 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 2 13:15:12.425207 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 2 13:15:12.425217 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 2 13:15:12.425228 kernel: Console: colour VGA+ 80x25
Mar 2 13:15:12.425240 kernel: printk: console [ttyS0] enabled
Mar 2 13:15:12.425251 kernel: ACPI: Core revision 20230628
Mar 2 13:15:12.425264 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 2 13:15:12.425273 kernel: APIC: Switch to symmetric I/O mode setup
Mar 2 13:15:12.425282 kernel: x2apic enabled
Mar 2 13:15:12.425291 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 2 13:15:12.425301 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 2 13:15:12.425312 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 2 13:15:12.425324 kernel: kvm-guest: setup PV IPIs
Mar 2 13:15:12.425336 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 2 13:15:12.425358 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 2 13:15:12.425368 kernel: Calibrating delay loop (skipped) preset value.. 4890.85 BogoMIPS (lpj=2445426)
Mar 2 13:15:12.425377 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 2 13:15:12.425389 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 2 13:15:12.425405 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 2 13:15:12.425417 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 2 13:15:12.425426 kernel: Spectre V2 : Mitigation: Retpolines
Mar 2 13:15:12.425436 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 2 13:15:12.425446 kernel: Speculative Store Bypass: Vulnerable
Mar 2 13:15:12.425460 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 2 13:15:12.425475 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 2 13:15:12.425484 kernel: active return thunk: srso_alias_return_thunk
Mar 2 13:15:12.425494 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 2 13:15:12.425504 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Mar 2 13:15:12.425513 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 2 13:15:12.425524 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 2 13:15:12.425536 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 2 13:15:12.425671 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 2 13:15:12.425683 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 2 13:15:12.425695 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 2 13:15:12.425707 kernel: Freeing SMP alternatives memory: 32K
Mar 2 13:15:12.425717 kernel: pid_max: default: 32768 minimum: 301
Mar 2 13:15:12.425727 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 2 13:15:12.425736 kernel: landlock: Up and running.
Mar 2 13:15:12.425746 kernel: SELinux: Initializing.
Mar 2 13:15:12.425758 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:15:12.425773 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 2 13:15:12.425783 kernel: smpboot: CPU0: AMD EPYC 7763 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Mar 2 13:15:12.425792 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:15:12.425802 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:15:12.425814 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 2 13:15:12.425825 kernel: Performance Events: PMU not available due to virtualization, using software events only.
Mar 2 13:15:12.425836 kernel: signal: max sigframe size: 1776
Mar 2 13:15:12.425847 kernel: rcu: Hierarchical SRCU implementation.
Mar 2 13:15:12.425859 kernel: rcu: Max phase no-delay instances is 400.
Mar 2 13:15:12.425875 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 2 13:15:12.425886 kernel: smp: Bringing up secondary CPUs ...
Mar 2 13:15:12.425896 kernel: smpboot: x86: Booting SMP configuration:
Mar 2 13:15:12.425908 kernel: .... node #0, CPUs: #1 #2 #3
Mar 2 13:15:12.425918 kernel: smp: Brought up 1 node, 4 CPUs
Mar 2 13:15:12.425930 kernel: smpboot: Max logical packages: 1
Mar 2 13:15:12.425941 kernel: smpboot: Total of 4 processors activated (19563.40 BogoMIPS)
Mar 2 13:15:12.425952 kernel: devtmpfs: initialized
Mar 2 13:15:12.425963 kernel: x86/mm: Memory block size: 128MB
Mar 2 13:15:12.425978 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 2 13:15:12.425990 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 2 13:15:12.426001 kernel: pinctrl core: initialized pinctrl subsystem
Mar 2 13:15:12.426012 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 2 13:15:12.426022 kernel: audit: initializing netlink subsys (disabled)
Mar 2 13:15:12.426034 kernel: audit: type=2000 audit(1772457308.803:1): state=initialized audit_enabled=0 res=1
Mar 2 13:15:12.426045 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 2 13:15:12.426056 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 2 13:15:12.426136 kernel: cpuidle: using governor menu
Mar 2 13:15:12.426152 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 2 13:15:12.426163 kernel: dca service started, version 1.12.1
Mar 2 13:15:12.426175 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 2 13:15:12.426186 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 2 13:15:12.426197 kernel: PCI: Using configuration type 1 for base access
Mar 2 13:15:12.426208 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 2 13:15:12.426219 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 2 13:15:12.426229 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 2 13:15:12.426240 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 2 13:15:12.426255 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 2 13:15:12.426266 kernel: ACPI: Added _OSI(Module Device)
Mar 2 13:15:12.426277 kernel: ACPI: Added _OSI(Processor Device)
Mar 2 13:15:12.426288 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 2 13:15:12.426299 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 2 13:15:12.426311 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 2 13:15:12.426324 kernel: ACPI: Interpreter enabled
Mar 2 13:15:12.426333 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 2 13:15:12.426343 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 2 13:15:12.426357 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 2 13:15:12.426366 kernel: PCI: Using E820 reservations for host bridge windows
Mar 2 13:15:12.426376 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 2 13:15:12.426389 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 2 13:15:12.426735 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 2 13:15:12.426930 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 2 13:15:12.427139 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 2 13:15:12.427156 kernel: PCI host bridge to bus 0000:00
Mar 2 13:15:12.427312 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 2 13:15:12.427483 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 2 13:15:12.427748 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 2 13:15:12.427914 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 2 13:15:12.428163 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 2 13:15:12.428318 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 2 13:15:12.428477 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 2 13:15:12.428759 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 2 13:15:12.428942 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 2 13:15:12.429175 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 2 13:15:12.429342 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 2 13:15:12.429521 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 2 13:15:12.430140 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 2 13:15:12.430350 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 2 13:15:12.430527 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 2 13:15:12.430791 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 2 13:15:12.430961 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 2 13:15:12.431236 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 2 13:15:12.431427 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 2 13:15:12.431735 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 2 13:15:12.431919 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 2 13:15:12.432181 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 2 13:15:12.432353 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 2 13:15:12.432525 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 2 13:15:12.432801 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 2 13:15:12.432982 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 2 13:15:12.433235 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 2 13:15:12.433408 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 2 13:15:12.433769 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 2 13:15:12.433951 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 2 13:15:12.434309 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 2 13:15:12.434491 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 2 13:15:12.434863 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 2 13:15:12.434886 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 2 13:15:12.434897 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 2 13:15:12.434909 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 2 13:15:12.434921 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 2 13:15:12.434932 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 2 13:15:12.434939 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 2 13:15:12.434946 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 2 13:15:12.434953 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 2 13:15:12.434960 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 2 13:15:12.434971 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 2 13:15:12.434978 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 2 13:15:12.434985 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 2 13:15:12.434991 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 2 13:15:12.434998 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 2 13:15:12.435005 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 2 13:15:12.435012 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 2 13:15:12.435019 kernel: iommu: Default domain type: Translated
Mar 2 13:15:12.435025 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 2 13:15:12.435034 kernel: PCI: Using ACPI for IRQ routing
Mar 2 13:15:12.435041 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 2 13:15:12.435048 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 2 13:15:12.435054 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 2 13:15:12.435298 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 2 13:15:12.435455 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 2 13:15:12.435794 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 2 13:15:12.435810 kernel: vgaarb: loaded
Mar 2 13:15:12.435825 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 2 13:15:12.435834 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 2 13:15:12.435844 kernel: clocksource: Switched to clocksource kvm-clock
Mar 2 13:15:12.435857 kernel: VFS: Disk quotas dquot_6.6.0
Mar 2 13:15:12.435868 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 2 13:15:12.435877 kernel: pnp: PnP ACPI init
Mar 2 13:15:12.436051 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 2 13:15:12.436211 kernel: pnp: PnP ACPI: found 6 devices
Mar 2 13:15:12.436228 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 2 13:15:12.436238 kernel: NET: Registered PF_INET protocol family
Mar 2 13:15:12.436248 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 2 13:15:12.436257 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 2 13:15:12.436267 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 2 13:15:12.436277 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 2 13:15:12.436287 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 2 13:15:12.436298 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 2 13:15:12.436308 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:15:12.436323 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 2 13:15:12.436334 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 2 13:15:12.436344 kernel: NET: Registered PF_XDP protocol family
Mar 2 13:15:12.436499 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 2 13:15:12.437346 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 2 13:15:12.437522 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 2 13:15:12.437806 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 2 13:15:12.437933 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 2 13:15:12.438176 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 2 13:15:12.438189 kernel: PCI: CLS 0 bytes, default 64
Mar 2 13:15:12.438196 kernel: Initialise system trusted keyrings
Mar 2 13:15:12.438204 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 2 13:15:12.438211 kernel: Key type asymmetric registered
Mar 2 13:15:12.438218 kernel: Asymmetric key parser 'x509' registered
Mar 2 13:15:12.438225 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 2 13:15:12.438232 kernel: io scheduler mq-deadline registered
Mar 2 13:15:12.438239 kernel: io scheduler kyber registered
Mar 2 13:15:12.438246 kernel: io scheduler bfq registered
Mar 2 13:15:12.438257 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 2 13:15:12.438265 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 2 13:15:12.438272 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 2 13:15:12.438280 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 2 13:15:12.438287 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 2 13:15:12.438294 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 2 13:15:12.438301 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 2 13:15:12.438308 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 2 13:15:12.438315 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 2 13:15:12.438501 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 2 13:15:12.438524 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 2 13:15:12.438755 kernel: rtc_cmos 00:04: registered as rtc0
Mar 2 13:15:12.438877 kernel: rtc_cmos 00:04: setting system clock to 2026-03-02T13:15:11 UTC (1772457311)
Mar 2 13:15:12.438993 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 2 13:15:12.439002 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 2 13:15:12.439010 kernel: NET: Registered PF_INET6 protocol family
Mar 2 13:15:12.439022 kernel: Segment Routing with IPv6
Mar 2 13:15:12.439029 kernel: In-situ OAM (IOAM) with IPv6
Mar 2 13:15:12.439037 kernel: NET: Registered PF_PACKET protocol family
Mar 2 13:15:12.439049 kernel: Key type dns_resolver registered
Mar 2 13:15:12.439111 kernel: IPI shorthand broadcast: enabled
Mar 2 13:15:12.439121 kernel: sched_clock: Marking stable (2174033233, 746412295)->(3305419826, -384974298)
Mar 2 13:15:12.439129 kernel: registered taskstats version 1
Mar 2 13:15:12.439136 kernel: Loading compiled-in X.509 certificates
Mar 2 13:15:12.439143 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: adc4961784537911a77ff0c4d6bd9b9639a51d45'
Mar 2 13:15:12.439155 kernel: Key type .fscrypt registered
Mar 2 13:15:12.439166 kernel: Key type fscrypt-provisioning registered
Mar 2 13:15:12.439179 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 2 13:15:12.439190 kernel: ima: Allocated hash algorithm: sha1
Mar 2 13:15:12.439197 kernel: ima: No architecture policies found
Mar 2 13:15:12.439204 kernel: clk: Disabling unused clocks
Mar 2 13:15:12.439212 kernel: Freeing unused kernel image (initmem) memory: 42892K
Mar 2 13:15:12.439219 kernel: Write protecting the kernel read-only data: 36864k
Mar 2 13:15:12.439226 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Mar 2 13:15:12.439236 kernel: Run /init as init process
Mar 2 13:15:12.439244 kernel: with arguments:
Mar 2 13:15:12.439251 kernel: /init
Mar 2 13:15:12.439258 kernel: with environment:
Mar 2 13:15:12.439265 kernel: HOME=/
Mar 2 13:15:12.439272 kernel: TERM=linux
Mar 2 13:15:12.439281 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 2 13:15:12.439291 systemd[1]: Detected virtualization kvm.
Mar 2 13:15:12.439301 systemd[1]: Detected architecture x86-64.
Mar 2 13:15:12.439308 systemd[1]: Running in initrd.
Mar 2 13:15:12.439315 systemd[1]: No hostname configured, using default hostname.
Mar 2 13:15:12.439322 systemd[1]: Hostname set to <localhost>.
Mar 2 13:15:12.439330 systemd[1]: Initializing machine ID from VM UUID.
Mar 2 13:15:12.439343 systemd[1]: Queued start job for default target initrd.target.
Mar 2 13:15:12.439357 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 2 13:15:12.439368 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 2 13:15:12.439383 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 2 13:15:12.439394 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 2 13:15:12.439404 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 2 13:15:12.439415 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 2 13:15:12.439432 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 2 13:15:12.439443 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 2 13:15:12.439453 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 2 13:15:12.439468 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 2 13:15:12.439478 systemd[1]: Reached target paths.target - Path Units.
Mar 2 13:15:12.439491 systemd[1]: Reached target slices.target - Slice Units.
Mar 2 13:15:12.439504 systemd[1]: Reached target swap.target - Swaps.
Mar 2 13:15:12.439526 systemd[1]: Reached target timers.target - Timer Units.
Mar 2 13:15:12.439537 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 2 13:15:12.439713 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 2 13:15:12.439725 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 2 13:15:12.439733 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 2 13:15:12.439741 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 2 13:15:12.439748 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 2 13:15:12.439756 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 2 13:15:12.439764 systemd[1]: Reached target sockets.target - Socket Units.
Mar 2 13:15:12.439772 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 2 13:15:12.439779 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 2 13:15:12.439792 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 2 13:15:12.439800 systemd[1]: Starting systemd-fsck-usr.service...
Mar 2 13:15:12.439807 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 2 13:15:12.439815 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 2 13:15:12.439823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:15:12.439859 systemd-journald[194]: Collecting audit messages is disabled.
Mar 2 13:15:12.439881 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 2 13:15:12.439890 systemd-journald[194]: Journal started
Mar 2 13:15:12.439906 systemd-journald[194]: Runtime Journal (/run/log/journal/37e81d34423f41089efb4a0bda7e93a4) is 6.0M, max 48.4M, 42.3M free.
Mar 2 13:15:12.448833 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 2 13:15:12.453271 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 2 13:15:12.454230 systemd[1]: Finished systemd-fsck-usr.service.
Mar 2 13:15:12.475242 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 2 13:15:12.776455 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 2 13:15:12.776500 kernel: Bridge firewalling registered
Mar 2 13:15:12.484301 systemd-modules-load[195]: Inserted module 'overlay'
Mar 2 13:15:12.538492 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 2 13:15:12.778830 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 2 13:15:12.799216 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 2 13:15:12.822911 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:15:12.830946 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 2 13:15:12.848703 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 2 13:15:12.885156 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:15:12.886782 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 2 13:15:12.897784 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 2 13:15:12.922237 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:15:12.935978 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 2 13:15:12.949639 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 2 13:15:12.978963 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 2 13:15:12.987445 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 2 13:15:13.003290 dracut-cmdline[230]: dracut-dracut-053
Mar 2 13:15:13.003290 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=5b1ae8ad250cf3ddd00dc7c63ded260e5b82ee29f2cdc578a6ade4cab26e6a0b
Mar 2 13:15:13.078376 systemd-resolved[235]: Positive Trust Anchors:
Mar 2 13:15:13.078443 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 2 13:15:13.078485 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 2 13:15:13.081827 systemd-resolved[235]: Defaulting to hostname 'linux'.
Mar 2 13:15:13.083725 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 2 13:15:13.090278 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:15:13.202722 kernel: SCSI subsystem initialized
Mar 2 13:15:13.218691 kernel: Loading iSCSI transport class v2.0-870.
Mar 2 13:15:13.240747 kernel: iscsi: registered transport (tcp)
Mar 2 13:15:13.270629 kernel: iscsi: registered transport (qla4xxx)
Mar 2 13:15:13.270731 kernel: QLogic iSCSI HBA Driver
Mar 2 13:15:13.336426 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 2 13:15:13.352977 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 2 13:15:13.392117 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 2 13:15:13.392164 kernel: device-mapper: uevent: version 1.0.3
Mar 2 13:15:13.396469 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 2 13:15:13.454701 kernel: raid6: avx2x4 gen() 24539 MB/s
Mar 2 13:15:13.473831 kernel: raid6: avx2x2 gen() 28123 MB/s
Mar 2 13:15:13.496994 kernel: raid6: avx2x1 gen() 20203 MB/s
Mar 2 13:15:13.497058 kernel: raid6: using algorithm avx2x2 gen() 28123 MB/s
Mar 2 13:15:13.520912 kernel: raid6: .... xor() 22477 MB/s, rmw enabled
Mar 2 13:15:13.521014 kernel: raid6: using avx2x2 recovery algorithm
Mar 2 13:15:13.553716 kernel: xor: automatically using best checksumming function avx
Mar 2 13:15:13.799977 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 2 13:15:13.826242 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 2 13:15:13.854877 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 2 13:15:13.876861 systemd-udevd[418]: Using default interface naming scheme 'v255'.
Mar 2 13:15:13.885317 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 2 13:15:13.895829 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 2 13:15:13.927959 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Mar 2 13:15:13.984814 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 2 13:15:14.005365 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 2 13:15:14.118292 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 2 13:15:14.142893 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 2 13:15:14.173918 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 2 13:15:14.180211 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 2 13:15:14.186983 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:15:14.206208 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 2 13:15:14.238728 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 2 13:15:14.245725 kernel: cryptd: max_cpu_qlen set to 1000
Mar 2 13:15:14.245924 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 2 13:15:14.281210 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 2 13:15:14.281651 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 2 13:15:14.281670 kernel: GPT:9289727 != 19775487
Mar 2 13:15:14.281681 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 2 13:15:14.281694 kernel: GPT:9289727 != 19775487
Mar 2 13:15:14.281703 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 2 13:15:14.281713 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:15:14.292426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 2 13:15:14.316734 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 2 13:15:14.318448 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:15:14.381247 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Mar 2 13:15:14.381283 kernel: BTRFS: device fsid a0930b2b-aeed-42a5-bf2f-ec141dfc71d3 devid 1 transid 35 /dev/vda3 scanned by (udev-worker) (462)
Mar 2 13:15:14.335671 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:15:14.349774 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 2 13:15:14.350222 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:15:14.409886 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:15:14.438232 kernel: libata version 3.00 loaded.
Mar 2 13:15:14.440367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 2 13:15:14.456223 kernel: ahci 0000:00:1f.2: version 3.0
Mar 2 13:15:14.459950 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 2 13:15:14.476830 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 2 13:15:14.477161 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 2 13:15:14.478974 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 2 13:15:14.503897 kernel: scsi host0: ahci
Mar 2 13:15:14.504376 kernel: scsi host1: ahci
Mar 2 13:15:14.504747 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 2 13:15:14.507696 kernel: scsi host2: ahci
Mar 2 13:15:14.514768 kernel: scsi host3: ahci
Mar 2 13:15:14.515394 kernel: AES CTR mode by8 optimization enabled
Mar 2 13:15:14.516752 kernel: scsi host4: ahci
Mar 2 13:15:14.517190 kernel: scsi host5: ahci
Mar 2 13:15:14.521249 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 31
Mar 2 13:15:14.521281 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 31
Mar 2 13:15:14.521298 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 31
Mar 2 13:15:14.521316 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 31
Mar 2 13:15:14.521331 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 31
Mar 2 13:15:14.521354 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 31
Mar 2 13:15:14.520517 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 2 13:15:14.933762 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 2 13:15:14.933795 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 2 13:15:14.933810 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 2 13:15:14.933824 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 2 13:15:14.933837 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 2 13:15:14.934447 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 2 13:15:14.934476 kernel: ata3.00: applying bridge limits
Mar 2 13:15:14.934490 kernel: ata3.00: configured for UDMA/100
Mar 2 13:15:14.934504 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 2 13:15:14.934517 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 2 13:15:14.944234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 2 13:15:14.961956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 2 13:15:14.984944 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 2 13:15:15.015906 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 2 13:15:15.016361 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 2 13:15:15.015741 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 2 13:15:15.053523 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 2 13:15:15.056533 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 2 13:15:15.080681 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:15:15.065162 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 2 13:15:15.081448 disk-uuid[568]: Primary Header is updated.
Mar 2 13:15:15.081448 disk-uuid[568]: Secondary Entries is updated.
Mar 2 13:15:15.081448 disk-uuid[568]: Secondary Header is updated.
Mar 2 13:15:15.106377 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 2 13:15:15.119837 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:15:15.136858 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:15:15.147944 kernel: block device autoloading is deprecated and will be removed.
Mar 2 13:15:16.136378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 2 13:15:16.138272 disk-uuid[570]: The operation has completed successfully.
Mar 2 13:15:16.192481 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 2 13:15:16.192732 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 2 13:15:16.238687 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 2 13:15:16.253314 sh[598]: Success
Mar 2 13:15:16.290431 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 2 13:15:16.381278 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 2 13:15:16.409444 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 2 13:15:16.420922 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 2 13:15:16.461787 kernel: BTRFS info (device dm-0): first mount of filesystem a0930b2b-aeed-42a5-bf2f-ec141dfc71d3
Mar 2 13:15:16.461826 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:15:16.461865 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 2 13:15:16.461889 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 2 13:15:16.461906 kernel: BTRFS info (device dm-0): using free space tree
Mar 2 13:15:16.495897 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 2 13:15:16.499790 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 2 13:15:16.515277 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 2 13:15:16.521959 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 2 13:15:16.553866 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:15:16.553922 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:15:16.553942 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:15:16.575682 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:15:16.590872 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 2 13:15:16.602511 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:15:16.628162 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 2 13:15:16.645078 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 2 13:15:16.750252 ignition[698]: Ignition 2.19.0
Mar 2 13:15:16.750264 ignition[698]: Stage: fetch-offline
Mar 2 13:15:16.750331 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:15:16.750348 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:15:16.750476 ignition[698]: parsed url from cmdline: ""
Mar 2 13:15:16.750484 ignition[698]: no config URL provided
Mar 2 13:15:16.750492 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Mar 2 13:15:16.750510 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Mar 2 13:15:16.775918 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 2 13:15:16.750667 ignition[698]: op(1): [started] loading QEMU firmware config module
Mar 2 13:15:16.750679 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 2 13:15:16.763677 ignition[698]: op(1): [finished] loading QEMU firmware config module
Mar 2 13:15:16.803485 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 2 13:15:16.847804 systemd-networkd[786]: lo: Link UP
Mar 2 13:15:16.847849 systemd-networkd[786]: lo: Gained carrier
Mar 2 13:15:16.850764 systemd-networkd[786]: Enumeration completed
Mar 2 13:15:16.852263 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:15:16.852267 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 2 13:15:16.853743 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 2 13:15:16.855227 systemd-networkd[786]: eth0: Link UP
Mar 2 13:15:16.855235 systemd-networkd[786]: eth0: Gained carrier
Mar 2 13:15:16.855246 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 2 13:15:16.862802 systemd[1]: Reached target network.target - Network.
Mar 2 13:15:16.892853 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 2 13:15:17.088694 ignition[698]: parsing config with SHA512: 4d0228f109177591f7e7eea724895060e44c4e753628dfb610771707c38f724d7050b9ffad6c111b11be9d1144eceb4785a175cd7d4bf1b11e2da6a03b315d88
Mar 2 13:15:17.098193 unknown[698]: fetched base config from "system"
Mar 2 13:15:17.098216 unknown[698]: fetched user config from "qemu"
Mar 2 13:15:17.101303 ignition[698]: fetch-offline: fetch-offline passed
Mar 2 13:15:17.105269 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 2 13:15:17.101658 ignition[698]: Ignition finished successfully
Mar 2 13:15:17.123938 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 2 13:15:17.152929 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 2 13:15:17.185725 ignition[790]: Ignition 2.19.0
Mar 2 13:15:17.185774 ignition[790]: Stage: kargs
Mar 2 13:15:17.186001 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:15:17.186014 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:15:17.187843 ignition[790]: kargs: kargs passed
Mar 2 13:15:17.187915 ignition[790]: Ignition finished successfully
Mar 2 13:15:17.211532 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 2 13:15:17.223864 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 2 13:15:17.258502 ignition[798]: Ignition 2.19.0
Mar 2 13:15:17.258536 ignition[798]: Stage: disks
Mar 2 13:15:17.259215 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Mar 2 13:15:17.259229 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:15:17.260528 ignition[798]: disks: disks passed
Mar 2 13:15:17.260634 ignition[798]: Ignition finished successfully
Mar 2 13:15:17.278937 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 2 13:15:17.283529 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 2 13:15:17.296124 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 2 13:15:17.296295 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 2 13:15:17.305306 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 2 13:15:17.316265 systemd[1]: Reached target basic.target - Basic System.
Mar 2 13:15:17.339366 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 2 13:15:17.368909 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 2 13:15:17.378199 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 2 13:15:17.413860 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 2 13:15:17.617803 kernel: EXT4-fs (vda9): mounted filesystem 84e86976-7918-44d3-a6f5-d0f90ce6c152 r/w with ordered data mode. Quota mode: none.
Mar 2 13:15:17.618963 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 2 13:15:17.619946 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 2 13:15:17.646872 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:15:17.656006 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 2 13:15:17.656761 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 2 13:15:17.683185 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Mar 2 13:15:17.656830 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 2 13:15:17.704928 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:15:17.704968 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:15:17.704986 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:15:17.656871 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 2 13:15:17.717533 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:15:17.726946 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:15:17.741941 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 2 13:15:17.756888 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 2 13:15:17.838466 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Mar 2 13:15:17.852710 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Mar 2 13:15:17.864653 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Mar 2 13:15:17.874921 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 2 13:15:18.071831 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 2 13:15:18.094792 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 2 13:15:18.101861 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 2 13:15:18.116027 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 2 13:15:18.126523 kernel: BTRFS info (device vda6): last unmount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:15:18.148842 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 2 13:15:18.166384 ignition[930]: INFO : Ignition 2.19.0
Mar 2 13:15:18.166384 ignition[930]: INFO : Stage: mount
Mar 2 13:15:18.179900 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:15:18.179900 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:15:18.179900 ignition[930]: INFO : mount: mount passed
Mar 2 13:15:18.179900 ignition[930]: INFO : Ignition finished successfully
Mar 2 13:15:18.170252 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 2 13:15:18.190858 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 2 13:15:18.299006 systemd-networkd[786]: eth0: Gained IPv6LL
Mar 2 13:15:18.638048 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 2 13:15:18.653686 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (942)
Mar 2 13:15:18.661392 kernel: BTRFS info (device vda6): first mount of filesystem 59abb777-1ea9-43fd-8326-9ccf988e79fa
Mar 2 13:15:18.661475 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 2 13:15:18.661503 kernel: BTRFS info (device vda6): using free space tree
Mar 2 13:15:18.675735 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 2 13:15:18.677959 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 2 13:15:18.718850 ignition[959]: INFO : Ignition 2.19.0
Mar 2 13:15:18.718850 ignition[959]: INFO : Stage: files
Mar 2 13:15:18.725368 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 2 13:15:18.725368 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 2 13:15:18.725368 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Mar 2 13:15:18.725368 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 2 13:15:18.725368 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 2 13:15:18.749068 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 2 13:15:18.749068 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 2 13:15:18.749068 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 2 13:15:18.749068 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:15:18.749068 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 2 13:15:18.727679 unknown[959]: wrote ssh authorized keys file for user: core
Mar 2 13:15:18.810260 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 2 13:15:18.935263 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 2 13:15:18.935263 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 2 13:15:18.949375 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Mar 2 13:15:19.104758 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 2 13:15:19.272231 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Mar 2 13:15:19.272231 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:15:19.285202 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 2 13:15:19.527213 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 2 13:15:20.109818 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 2 13:15:20.109818 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Mar 2 13:15:20.128661 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:15:20.140197 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 2 13:15:20.140197 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Mar 2 13:15:20.140197 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Mar 2 13:15:20.164046 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 13:15:20.164046 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 2 13:15:20.164046 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Mar 2 13:15:20.164046 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Mar 2 13:15:20.278165 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 13:15:20.290235 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 2 13:15:20.297180 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 2 13:15:20.297180 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Mar 2 13:15:20.297180 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Mar 2 13:15:20.297180 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:15:20.297180 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 2 13:15:20.297180 ignition[959]: INFO : files: files passed
Mar 2 13:15:20.297180 ignition[959]: INFO : Ignition finished successfully
Mar 2 13:15:20.308048 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 2 13:15:20.345393 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 2 13:15:20.351912 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 2 13:15:20.370341 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 2 13:15:20.370751 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 2 13:15:20.384692 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 2 13:15:20.389966 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:15:20.389966 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:15:20.412495 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 2 13:15:20.390061 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 2 13:15:20.402370 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 2 13:15:20.436307 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 2 13:15:20.486620 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 2 13:15:20.486891 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 2 13:15:20.501885 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 2 13:15:20.502087 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 2 13:15:20.503422 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 2 13:15:20.505011 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 2 13:15:20.537153 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:15:20.561239 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 2 13:15:20.590445 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 2 13:15:20.590817 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 2 13:15:20.610253 systemd[1]: Stopped target timers.target - Timer Units.
Mar 2 13:15:20.611688 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 2 13:15:20.611854 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 2 13:15:20.618256 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 2 13:15:20.621323 systemd[1]: Stopped target basic.target - Basic System. Mar 2 13:15:20.622392 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 2 13:15:20.649442 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 2 13:15:20.652665 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 2 13:15:20.670282 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 2 13:15:20.675039 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 2 13:15:20.687725 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 2 13:15:20.705446 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 2 13:15:20.709298 systemd[1]: Stopped target swap.target - Swaps. Mar 2 13:15:20.714201 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 2 13:15:20.714402 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 2 13:15:20.730198 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:15:20.747896 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:15:20.753401 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 2 13:15:20.754055 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:15:20.764038 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 2 13:15:20.764270 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 2 13:15:20.792331 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 2 13:15:20.792716 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 2 13:15:20.800240 systemd[1]: Stopped target paths.target - Path Units. Mar 2 13:15:20.804712 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 2 13:15:20.809330 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:15:20.818769 systemd[1]: Stopped target slices.target - Slice Units. Mar 2 13:15:20.818944 systemd[1]: Stopped target sockets.target - Socket Units. Mar 2 13:15:20.823609 systemd[1]: iscsid.socket: Deactivated successfully. Mar 2 13:15:20.823797 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 2 13:15:20.829718 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 2 13:15:20.829864 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 2 13:15:20.843987 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 2 13:15:20.844361 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 2 13:15:20.866534 systemd[1]: ignition-files.service: Deactivated successfully. Mar 2 13:15:20.866786 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 2 13:15:20.894062 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
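Before the initrd tears itself down above, Ignition's files stage recorded its outcome at /sysroot/etc/.ignition-result.json (op(13)), which is visible as /etc/.ignition-result.json after the root switch. A read-only sketch for checking it post-boot; only the path is taken from the log, and no particular JSON schema is assumed since the keys vary by Ignition version:

    import json

    RESULT_PATH = "/etc/.ignition-result.json"  # path from the op(13) entry above

    try:
        with open(RESULT_PATH) as f:
            # Dump whatever the installed Ignition version recorded.
            for key, value in json.load(f).items():
                print(f"{key}: {value}")
    except FileNotFoundError:
        print("no Ignition result file; host may not be Ignition-provisioned")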
Mar 2 13:15:20.929828 ignition[1014]: INFO : Ignition 2.19.0 Mar 2 13:15:20.929828 ignition[1014]: INFO : Stage: umount Mar 2 13:15:20.929828 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 2 13:15:20.929828 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 2 13:15:20.929828 ignition[1014]: INFO : umount: umount passed Mar 2 13:15:20.929828 ignition[1014]: INFO : Ignition finished successfully Mar 2 13:15:20.905898 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 2 13:15:20.911134 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 2 13:15:20.911413 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 2 13:15:20.930027 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 2 13:15:20.930460 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 2 13:15:20.956288 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 2 13:15:20.957236 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 2 13:15:20.957385 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 2 13:15:20.964736 systemd[1]: Stopped target network.target - Network. Mar 2 13:15:20.971464 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 2 13:15:20.971652 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 2 13:15:20.980336 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 2 13:15:20.980495 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 2 13:15:20.992267 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 2 13:15:20.992369 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 2 13:15:21.001046 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 2 13:15:21.001197 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 2 13:15:21.017482 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 2 13:15:21.074007 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 2 13:15:21.084332 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 2 13:15:21.084629 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 2 13:15:21.088673 systemd-networkd[786]: eth0: DHCPv6 lease lost Mar 2 13:15:21.103175 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 2 13:15:21.107852 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 2 13:15:21.118883 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 2 13:15:21.123446 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 2 13:15:21.138685 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 2 13:15:21.143625 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 2 13:15:21.158759 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 2 13:15:21.158874 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:15:21.174812 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 2 13:15:21.174920 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 2 13:15:21.200722 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 2 13:15:21.204798 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 2 13:15:21.204882 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Mar 2 13:15:21.229152 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:15:21.229271 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:15:21.241999 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 2 13:15:21.242087 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 2 13:15:21.259004 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 2 13:15:21.263427 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:15:21.279533 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:15:21.310797 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 2 13:15:21.316289 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:15:21.327965 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 2 13:15:21.328166 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 2 13:15:21.348686 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 2 13:15:21.348790 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 2 13:15:21.361486 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 2 13:15:21.366154 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:15:21.374146 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 2 13:15:21.374269 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 2 13:15:21.387060 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 2 13:15:21.387421 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 2 13:15:21.405459 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 2 13:15:21.406173 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 2 13:15:21.434451 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 2 13:15:21.440748 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 2 13:15:21.440872 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:15:21.453338 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 2 13:15:21.453437 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:15:21.457640 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 2 13:15:21.458528 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:15:21.485725 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 2 13:15:21.485830 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:15:21.500664 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 2 13:15:21.500876 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 2 13:15:21.513285 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 2 13:15:21.545052 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 2 13:15:21.563986 systemd[1]: Switching root. Mar 2 13:15:21.601721 systemd-journald[194]: Journal stopped Mar 2 13:15:23.646752 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). 
Mar 2 13:15:23.646851 kernel: SELinux: policy capability network_peer_controls=1 Mar 2 13:15:23.646871 kernel: SELinux: policy capability open_perms=1 Mar 2 13:15:23.646887 kernel: SELinux: policy capability extended_socket_class=1 Mar 2 13:15:23.646909 kernel: SELinux: policy capability always_check_network=0 Mar 2 13:15:23.646926 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 2 13:15:23.646942 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 2 13:15:23.646957 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 2 13:15:23.646978 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 2 13:15:23.646994 kernel: audit: type=1403 audit(1772457321.911:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 2 13:15:23.647011 systemd[1]: Successfully loaded SELinux policy in 83.028ms. Mar 2 13:15:23.647043 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.376ms. Mar 2 13:15:23.647062 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 2 13:15:23.647079 systemd[1]: Detected virtualization kvm. Mar 2 13:15:23.647096 systemd[1]: Detected architecture x86-64. Mar 2 13:15:23.647178 systemd[1]: Detected first boot. Mar 2 13:15:23.647203 systemd[1]: Initializing machine ID from VM UUID. Mar 2 13:15:23.647223 zram_generator::config[1058]: No configuration found. Mar 2 13:15:23.647253 systemd[1]: Populated /etc with preset unit settings. Mar 2 13:15:23.647272 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 2 13:15:23.647291 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 2 13:15:23.647311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 2 13:15:23.647327 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 2 13:15:23.647350 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 2 13:15:23.647368 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 2 13:15:23.647384 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 2 13:15:23.647404 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 2 13:15:23.647421 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 2 13:15:23.647439 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 2 13:15:23.647456 systemd[1]: Created slice user.slice - User and Session Slice. Mar 2 13:15:23.647473 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 2 13:15:23.647490 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 2 13:15:23.647512 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 2 13:15:23.647528 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 2 13:15:23.647608 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 2 13:15:23.647628 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
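The systemd 255 banner above encodes compile-time options as +NAME/-NAME tokens (trailed by default-hierarchy=unified). A quick triage sketch for splitting them into enabled and disabled sets; the variable names are ours:

    # Feature string copied verbatim from the systemd 255 banner above.
    BANNER = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
              "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
              "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
              "-QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
              "-XKBCOMMON +UTMP -SYSVINIT")

    enabled = {tok[1:] for tok in BANNER.split() if tok.startswith("+")}
    disabled = {tok[1:] for tok in BANNER.split() if tok.startswith("-")}

    print(f"{len(enabled)} features compiled in, {len(disabled)} compiled out")
    # Consistent with the SELinux policy load logged above:
    assert "SELINUX" in enabled and "APPARMOR" in disabled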
Mar 2 13:15:23.647646 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 2 13:15:23.647662 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 2 13:15:23.647679 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 2 13:15:23.647697 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 2 13:15:23.647715 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 2 13:15:23.647737 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 2 13:15:23.647754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 2 13:15:23.647771 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 2 13:15:23.647788 systemd[1]: Reached target slices.target - Slice Units. Mar 2 13:15:23.647805 systemd[1]: Reached target swap.target - Swaps. Mar 2 13:15:23.647822 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 2 13:15:23.647839 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 2 13:15:23.647856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 2 13:15:23.647877 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 2 13:15:23.647896 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 2 13:15:23.647914 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 2 13:15:23.647935 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 2 13:15:23.647955 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 2 13:15:23.647974 systemd[1]: Mounting media.mount - External Media Directory... Mar 2 13:15:23.647995 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:15:23.648069 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 2 13:15:23.648088 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 2 13:15:23.648109 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 2 13:15:23.648178 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 2 13:15:23.648198 systemd[1]: Reached target machines.target - Containers. Mar 2 13:15:23.648215 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 2 13:15:23.648232 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 13:15:23.648249 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 2 13:15:23.648268 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 2 13:15:23.648285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 13:15:23.648306 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 13:15:23.649176 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 13:15:23.649354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 2 13:15:23.649375 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Mar 2 13:15:23.649395 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 2 13:15:23.649411 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 2 13:15:23.649694 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 2 13:15:23.649717 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 2 13:15:23.649741 systemd[1]: Stopped systemd-fsck-usr.service. Mar 2 13:15:23.649758 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 2 13:15:23.649775 kernel: fuse: init (API version 7.39) Mar 2 13:15:23.649793 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 2 13:15:23.649810 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 2 13:15:23.649826 kernel: loop: module loaded Mar 2 13:15:23.649848 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 2 13:15:23.649867 kernel: ACPI: bus type drm_connector registered Mar 2 13:15:23.649884 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 2 13:15:23.649933 systemd-journald[1128]: Collecting audit messages is disabled. Mar 2 13:15:23.649971 systemd[1]: verity-setup.service: Deactivated successfully. Mar 2 13:15:23.649989 systemd[1]: Stopped verity-setup.service. Mar 2 13:15:23.650006 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:15:23.650024 systemd-journald[1128]: Journal started Mar 2 13:15:23.650053 systemd-journald[1128]: Runtime Journal (/run/log/journal/37e81d34423f41089efb4a0bda7e93a4) is 6.0M, max 48.4M, 42.3M free. Mar 2 13:15:22.915073 systemd[1]: Queued start job for default target multi-user.target. Mar 2 13:15:22.942912 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Mar 2 13:15:22.943787 systemd[1]: systemd-journald.service: Deactivated successfully. Mar 2 13:15:22.944360 systemd[1]: systemd-journald.service: Consumed 1.867s CPU time. Mar 2 13:15:23.667090 systemd[1]: Started systemd-journald.service - Journal Service. Mar 2 13:15:23.669061 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 2 13:15:23.673398 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 2 13:15:23.678727 systemd[1]: Mounted media.mount - External Media Directory. Mar 2 13:15:23.682969 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 2 13:15:23.687424 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 2 13:15:23.691992 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 2 13:15:23.696277 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 2 13:15:23.701432 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 2 13:15:23.708061 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 2 13:15:23.708394 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 2 13:15:23.716965 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 13:15:23.717412 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 2 13:15:23.724357 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 2 13:15:23.724748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 13:15:23.731278 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 13:15:23.731803 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 13:15:23.740697 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 2 13:15:23.741029 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 2 13:15:23.747401 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 13:15:23.747834 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 13:15:23.754013 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 2 13:15:23.760870 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 2 13:15:23.769480 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 2 13:15:23.799621 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 2 13:15:23.825206 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 2 13:15:23.835738 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 2 13:15:23.840034 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 2 13:15:23.840184 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 2 13:15:23.845684 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Mar 2 13:15:23.853696 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Mar 2 13:15:23.866865 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 2 13:15:23.874032 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 13:15:23.877104 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 2 13:15:23.883715 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 2 13:15:23.889626 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 13:15:23.892430 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 2 13:15:23.897945 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 13:15:23.907056 systemd-journald[1128]: Time spent on flushing to /var/log/journal/37e81d34423f41089efb4a0bda7e93a4 is 21.224ms for 946 entries. Mar 2 13:15:23.907056 systemd-journald[1128]: System Journal (/var/log/journal/37e81d34423f41089efb4a0bda7e93a4) is 8.0M, max 195.6M, 187.6M free. Mar 2 13:15:23.943418 systemd-journald[1128]: Received client request to flush runtime journal. Mar 2 13:15:23.903785 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:15:23.914472 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 2 13:15:23.934082 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 2 13:15:23.942253 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Mar 2 13:15:23.949176 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 2 13:15:23.957388 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 2 13:15:23.972748 kernel: loop0: detected capacity change from 0 to 142488 Mar 2 13:15:23.965858 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 2 13:15:23.976078 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 2 13:15:23.984049 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 2 13:15:23.996803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:15:24.009094 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 2 13:15:24.015246 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Mar 2 13:15:24.015271 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Mar 2 13:15:24.022051 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 2 13:15:24.029457 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Mar 2 13:15:24.037834 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 2 13:15:24.045281 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 2 13:15:24.062879 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 2 13:15:24.079877 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Mar 2 13:15:24.084210 kernel: loop1: detected capacity change from 0 to 140768 Mar 2 13:15:24.084905 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 2 13:15:24.087905 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Mar 2 13:15:24.125313 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 2 13:15:24.139010 kernel: loop2: detected capacity change from 0 to 228704 Mar 2 13:15:24.143006 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 2 13:15:24.188979 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Mar 2 13:15:24.189009 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Mar 2 13:15:24.196284 kernel: loop3: detected capacity change from 0 to 142488 Mar 2 13:15:24.199443 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 2 13:15:24.244762 kernel: loop4: detected capacity change from 0 to 140768 Mar 2 13:15:24.284712 kernel: loop5: detected capacity change from 0 to 228704 Mar 2 13:15:24.311844 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 2 13:15:24.312843 (sd-merge)[1198]: Merged extensions into '/usr'. Mar 2 13:15:24.321352 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Mar 2 13:15:24.321411 systemd[1]: Reloading... Mar 2 13:15:24.401713 zram_generator::config[1228]: No configuration found. Mar 2 13:15:24.523526 ldconfig[1167]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 2 13:15:24.571333 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
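The (sd-merge) entries above show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar', and 'kubernetes' extension images onto /usr; the kubernetes.raw symlink under /etc/extensions was written by Ignition earlier in this log. A read-only sketch for listing the images sysext would consider, assuming the standard search directories documented for systemd-sysext:

    from pathlib import Path

    # Standard systemd-sysext search directories (an assumption taken from the
    # systemd-sysext man page, not from this log).
    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    for d in map(Path, SEARCH_DIRS):
        if not d.is_dir():
            continue
        for entry in sorted(d.iterdir()):
            # Show symlink targets, e.g. kubernetes.raw -> /opt/extensions/...
            suffix = f" -> {entry.resolve()}" if entry.is_symlink() else ""
            print(f"{entry}{suffix}")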
Mar 2 13:15:24.626760 systemd[1]: Reloading finished in 304 ms. Mar 2 13:15:24.675293 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 2 13:15:24.681028 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 2 13:15:24.687043 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 2 13:15:24.717347 systemd[1]: Starting ensure-sysext.service... Mar 2 13:15:24.727320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 2 13:15:24.736430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 2 13:15:24.743753 systemd[1]: Reloading requested from client PID 1263 ('systemctl') (unit ensure-sysext.service)... Mar 2 13:15:24.743905 systemd[1]: Reloading... Mar 2 13:15:24.773031 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 2 13:15:24.774691 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 2 13:15:24.776708 systemd-tmpfiles[1264]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 2 13:15:24.777315 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Mar 2 13:15:24.777451 systemd-tmpfiles[1264]: ACLs are not supported, ignoring. Mar 2 13:15:24.784898 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 13:15:24.784919 systemd-tmpfiles[1264]: Skipping /boot Mar 2 13:15:24.793880 systemd-udevd[1265]: Using default interface naming scheme 'v255'. Mar 2 13:15:24.826459 systemd-tmpfiles[1264]: Detected autofs mount point /boot during canonicalization of boot. Mar 2 13:15:24.826723 systemd-tmpfiles[1264]: Skipping /boot Mar 2 13:15:24.831697 zram_generator::config[1291]: No configuration found. Mar 2 13:15:24.956641 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1322) Mar 2 13:15:25.001693 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 2 13:15:25.028623 kernel: ACPI: button: Power Button [PWRF] Mar 2 13:15:25.043440 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:15:25.061688 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 2 13:15:25.062103 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 2 13:15:25.064708 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 2 13:15:25.071050 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 2 13:15:25.185721 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 2 13:15:25.193709 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 2 13:15:25.194038 systemd[1]: Reloading finished in 448 ms. 
Mar 2 13:15:25.368631 kernel: mousedev: PS/2 mouse device common for all mice Mar 2 13:15:25.399189 kernel: kvm_amd: TSC scaling supported Mar 2 13:15:25.399288 kernel: kvm_amd: Nested Virtualization enabled Mar 2 13:15:25.399313 kernel: kvm_amd: Nested Paging enabled Mar 2 13:15:25.403076 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 2 13:15:25.403171 kernel: kvm_amd: PMU virtualization is disabled Mar 2 13:15:25.404228 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 2 13:15:25.459409 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 2 13:15:25.476659 kernel: EDAC MC: Ver: 3.0.0 Mar 2 13:15:25.493517 systemd[1]: Finished ensure-sysext.service. Mar 2 13:15:25.510437 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 2 13:15:25.534430 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:15:25.549095 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 13:15:25.557188 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 2 13:15:25.561998 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 2 13:15:25.564902 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 2 13:15:25.572218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 2 13:15:25.579406 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 2 13:15:25.586858 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 2 13:15:25.596681 lvm[1365]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 13:15:25.600885 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 2 13:15:25.608936 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 2 13:15:25.611819 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 2 13:15:25.621400 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 2 13:15:25.635757 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 2 13:15:25.642835 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 2 13:15:25.648842 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 2 13:15:25.653193 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 2 13:15:25.657647 augenrules[1386]: No rules Mar 2 13:15:25.659350 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 2 13:15:25.661030 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 2 13:15:25.663764 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 13:15:25.664914 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 2 13:15:25.665906 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 2 13:15:25.666869 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Mar 2 13:15:25.667733 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 2 13:15:25.668788 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 2 13:15:25.679991 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 2 13:15:25.680414 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 2 13:15:25.685025 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 2 13:15:25.685356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 2 13:15:25.706702 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 2 13:15:25.718057 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 2 13:15:25.731114 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 2 13:15:25.731309 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 2 13:15:25.731403 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 2 13:15:25.739192 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 2 13:15:25.740443 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 2 13:15:25.742315 lvm[1404]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 2 13:15:25.743045 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 2 13:15:25.758671 systemd[1]: Starting systemd-update-done.service - Update is Completed... Mar 2 13:15:25.788206 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 2 13:15:25.792837 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 2 13:15:25.834494 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 2 13:15:25.870097 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 2 13:15:25.872894 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 2 13:15:25.975377 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 2 13:15:25.986482 systemd-networkd[1385]: lo: Link UP Mar 2 13:15:25.986496 systemd-networkd[1385]: lo: Gained carrier Mar 2 13:15:25.990377 systemd-networkd[1385]: Enumeration completed Mar 2 13:15:25.992036 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:15:25.992043 systemd-networkd[1385]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 2 13:15:25.996388 systemd-networkd[1385]: eth0: Link UP Mar 2 13:15:25.996422 systemd-networkd[1385]: eth0: Gained carrier Mar 2 13:15:25.996445 systemd-networkd[1385]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 2 13:15:26.002207 systemd-resolved[1389]: Positive Trust Anchors: Mar 2 13:15:26.002251 systemd-resolved[1389]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 2 13:15:26.002294 systemd-resolved[1389]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 2 13:15:26.016487 systemd-resolved[1389]: Defaulting to hostname 'linux'. Mar 2 13:15:26.018680 systemd-networkd[1385]: eth0: DHCPv4 address 10.0.0.96/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 2 13:15:26.024842 systemd-timesyncd[1391]: Network configuration changed, trying to establish connection. Mar 2 13:15:26.026696 systemd-timesyncd[1391]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 2 13:15:26.026792 systemd-timesyncd[1391]: Initial clock synchronization to Mon 2026-03-02 13:15:26.414646 UTC. Mar 2 13:15:26.142271 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 2 13:15:26.150302 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 2 13:15:26.157990 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 2 13:15:26.168109 systemd[1]: Reached target network.target - Network. Mar 2 13:15:26.173919 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 2 13:15:26.188522 systemd[1]: Reached target sysinit.target - System Initialization. Mar 2 13:15:26.199395 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 2 13:15:26.208691 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 2 13:15:26.217030 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 2 13:15:26.222967 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 2 13:15:26.223039 systemd[1]: Reached target paths.target - Path Units. Mar 2 13:15:26.230916 systemd[1]: Reached target time-set.target - System Time Set. Mar 2 13:15:26.235791 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 2 13:15:26.239914 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 2 13:15:26.244696 systemd[1]: Reached target timers.target - Timer Units. Mar 2 13:15:26.251765 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 2 13:15:26.261350 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 2 13:15:26.278928 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 2 13:15:26.287692 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 2 13:15:26.304215 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 2 13:15:26.310448 systemd[1]: Reached target sockets.target - Socket Units. Mar 2 13:15:26.317519 systemd[1]: Reached target basic.target - Basic System. Mar 2 13:15:26.322694 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
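The DHCPv4 entry above hands eth0 the address 10.0.0.96/16 with gateway 10.0.0.1. Python's ipaddress module makes the boundaries implied by that /16 explicit:

    import ipaddress

    # Address and gateway as reported by the systemd-networkd lease above.
    iface = ipaddress.ip_interface("10.0.0.96/16")
    gateway = ipaddress.ip_address("10.0.0.1")

    print(iface.network)                # 10.0.0.0/16
    print(iface.network.netmask)        # 255.255.0.0
    print(iface.network.num_addresses)  # 65536
    print(gateway in iface.network)     # True: gateway is on-link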
Mar 2 13:15:26.322764 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 2 13:15:26.332961 systemd[1]: Starting containerd.service - containerd container runtime... Mar 2 13:15:26.345381 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 2 13:15:26.363061 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 2 13:15:26.372432 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 2 13:15:26.384720 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 2 13:15:26.387640 jq[1430]: false Mar 2 13:15:26.390514 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 2 13:15:26.399835 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 2 13:15:26.419269 extend-filesystems[1431]: Found loop3 Mar 2 13:15:26.419269 extend-filesystems[1431]: Found loop4 Mar 2 13:15:26.419269 extend-filesystems[1431]: Found loop5 Mar 2 13:15:26.419269 extend-filesystems[1431]: Found sr0 Mar 2 13:15:26.419269 extend-filesystems[1431]: Found vda Mar 2 13:15:26.419269 extend-filesystems[1431]: Found vda1 Mar 2 13:15:26.419269 extend-filesystems[1431]: Found vda2 Mar 2 13:15:26.419269 extend-filesystems[1431]: Found vda3 Mar 2 13:15:26.419269 extend-filesystems[1431]: Found usr Mar 2 13:15:26.419269 extend-filesystems[1431]: Found vda4 Mar 2 13:15:26.419269 extend-filesystems[1431]: Found vda6 Mar 2 13:15:26.419269 extend-filesystems[1431]: Found vda7 Mar 2 13:15:26.419269 extend-filesystems[1431]: Found vda9 Mar 2 13:15:26.419269 extend-filesystems[1431]: Checking size of /dev/vda9 Mar 2 13:15:26.520736 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (1322) Mar 2 13:15:26.520777 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 2 13:15:26.413885 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 2 13:15:26.443199 dbus-daemon[1429]: [system] SELinux support is enabled Mar 2 13:15:26.521401 extend-filesystems[1431]: Resized partition /dev/vda9 Mar 2 13:15:26.421790 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 2 13:15:26.541684 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024) Mar 2 13:15:26.447920 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 2 13:15:26.451254 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 2 13:15:26.452055 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 2 13:15:26.460898 systemd[1]: Starting update-engine.service - Update Engine... Mar 2 13:15:26.564094 update_engine[1445]: I20260302 13:15:26.521966 1445 main.cc:92] Flatcar Update Engine starting Mar 2 13:15:26.564094 update_engine[1445]: I20260302 13:15:26.525497 1445 update_check_scheduler.cc:74] Next update check in 11m7s Mar 2 13:15:26.496937 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 2 13:15:26.564780 jq[1450]: true Mar 2 13:15:26.508088 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 2 13:15:26.520807 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Mar 2 13:15:26.521103 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 2 13:15:26.521659 systemd[1]: motdgen.service: Deactivated successfully. Mar 2 13:15:26.521847 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 2 13:15:26.543442 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 2 13:15:26.543811 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 2 13:15:26.585237 jq[1456]: true Mar 2 13:15:26.587887 systemd-logind[1441]: Watching system buttons on /dev/input/event1 (Power Button) Mar 2 13:15:26.587948 systemd-logind[1441]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 2 13:15:26.588513 systemd-logind[1441]: New seat seat0. Mar 2 13:15:26.598790 systemd[1]: Started systemd-logind.service - User Login Management. Mar 2 13:15:26.599416 (ntainerd)[1459]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 2 13:15:26.608810 dbus-daemon[1429]: [system] Successfully activated service 'org.freedesktop.systemd1' Mar 2 13:15:26.614946 tar[1454]: linux-amd64/LICENSE Mar 2 13:15:26.615275 tar[1454]: linux-amd64/helm Mar 2 13:15:26.626723 systemd[1]: Started update-engine.service - Update Engine. Mar 2 13:15:26.632871 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 2 13:15:26.646879 sshd_keygen[1452]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 2 13:15:26.633166 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 2 13:15:26.639961 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 2 13:15:26.640186 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 2 13:15:26.659642 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 2 13:15:26.662960 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 2 13:15:26.687618 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 2 13:15:26.702770 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 2 13:15:26.709241 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 2 13:15:26.709241 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 2 13:15:26.709241 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 2 13:15:26.752090 extend-filesystems[1431]: Resized filesystem in /dev/vda9 Mar 2 13:15:26.715813 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 2 13:15:26.716363 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 2 13:15:26.739447 systemd[1]: issuegen.service: Deactivated successfully. Mar 2 13:15:26.740231 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 2 13:15:26.761271 bash[1488]: Updated "/home/core/.ssh/authorized_keys" Mar 2 13:15:26.774657 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 2 13:15:26.781952 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
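extend-filesystems grows /dev/vda9 online from 553472 to 1864699 blocks (the EXT4 and resize2fs entries above, with 4 KiB blocks). The same arithmetic, for scale:

    # Block counts from the resize entries above; this ext4 uses 4 KiB blocks.
    BLOCK_SIZE = 4096
    for blocks in (553472, 1864699):
        print(f"{blocks:>8} blocks = {blocks * BLOCK_SIZE / 2**30:.2f} GiB")
    #   553472 blocks = 2.11 GiB
    #  1864699 blocks = 7.11 GiB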
Mar 2 13:15:26.785777 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 2 13:15:26.796193 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 2 13:15:26.819890 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 2 13:15:26.838332 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 2 13:15:26.850032 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 2 13:15:26.856382 systemd[1]: Reached target getty.target - Login Prompts. Mar 2 13:15:27.066054 containerd[1459]: time="2026-03-02T13:15:27.065844955Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 2 13:15:27.115096 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 2 13:15:27.121518 containerd[1459]: time="2026-03-02T13:15:27.121424650Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:15:27.125112 containerd[1459]: time="2026-03-02T13:15:27.124922145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:15:27.125112 containerd[1459]: time="2026-03-02T13:15:27.124963407Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 2 13:15:27.125112 containerd[1459]: time="2026-03-02T13:15:27.124984805Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 2 13:15:27.125246 containerd[1459]: time="2026-03-02T13:15:27.125214808Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 2 13:15:27.125283 containerd[1459]: time="2026-03-02T13:15:27.125246101Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 2 13:15:27.125872 containerd[1459]: time="2026-03-02T13:15:27.125342829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:15:27.125872 containerd[1459]: time="2026-03-02T13:15:27.125367778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:15:27.125872 containerd[1459]: time="2026-03-02T13:15:27.125695558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:15:27.125872 containerd[1459]: time="2026-03-02T13:15:27.125755236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 2 13:15:27.125872 containerd[1459]: time="2026-03-02T13:15:27.125779503Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:15:27.125872 containerd[1459]: time="2026-03-02T13:15:27.125794419Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Mar 2 13:15:27.126115 containerd[1459]: time="2026-03-02T13:15:27.125919100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:15:27.126889 containerd[1459]: time="2026-03-02T13:15:27.126301946Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 2 13:15:27.126889 containerd[1459]: time="2026-03-02T13:15:27.126453310Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 2 13:15:27.126889 containerd[1459]: time="2026-03-02T13:15:27.126471378Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 2 13:15:27.126889 containerd[1459]: time="2026-03-02T13:15:27.126667103Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 2 13:15:27.126889 containerd[1459]: time="2026-03-02T13:15:27.126766626Z" level=info msg="metadata content store policy set" policy=shared Mar 2 13:15:27.145159 systemd[1]: Started sshd@0-10.0.0.96:22-10.0.0.1:40164.service - OpenSSH per-connection server daemon (10.0.0.1:40164). Mar 2 13:15:27.160663 containerd[1459]: time="2026-03-02T13:15:27.159099408Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 2 13:15:27.160663 containerd[1459]: time="2026-03-02T13:15:27.159220434Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 2 13:15:27.160663 containerd[1459]: time="2026-03-02T13:15:27.159251286Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 2 13:15:27.160663 containerd[1459]: time="2026-03-02T13:15:27.159276487Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 2 13:15:27.160663 containerd[1459]: time="2026-03-02T13:15:27.159310995Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 2 13:15:27.160663 containerd[1459]: time="2026-03-02T13:15:27.160260644Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 2 13:15:27.160663 containerd[1459]: time="2026-03-02T13:15:27.160463417Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 2 13:15:27.161003 containerd[1459]: time="2026-03-02T13:15:27.160691529Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 2 13:15:27.161003 containerd[1459]: time="2026-03-02T13:15:27.160710426Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 2 13:15:27.161003 containerd[1459]: time="2026-03-02T13:15:27.160724524Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 2 13:15:27.161003 containerd[1459]: time="2026-03-02T13:15:27.160779990Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 2 13:15:27.161003 containerd[1459]: time="2026-03-02T13:15:27.160800484Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Mar 2 13:15:27.161003 containerd[1459]: time="2026-03-02T13:15:27.160821809Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 2 13:15:27.161003 containerd[1459]: time="2026-03-02T13:15:27.160839437Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161435478Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161461077Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161476131Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161488106Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161507813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161520881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161533161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161548267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161559697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161572292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161686310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161703958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.161715 containerd[1459]: time="2026-03-02T13:15:27.161718140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161731681Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161743541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161755390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161766724Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." 
type=io.containerd.grpc.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161813440Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161842789Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161862234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161879209Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161947763Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161974446Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.161992388Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.162012001Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 2 13:15:27.163967 containerd[1459]: time="2026-03-02T13:15:27.162028041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 2 13:15:27.164375 containerd[1459]: time="2026-03-02T13:15:27.162047065Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 2 13:15:27.164375 containerd[1459]: time="2026-03-02T13:15:27.162060806Z" level=info msg="NRI interface is disabled by configuration." Mar 2 13:15:27.164375 containerd[1459]: time="2026-03-02T13:15:27.162071647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 2 13:15:27.164486 containerd[1459]: time="2026-03-02T13:15:27.162509318Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 2 13:15:27.164486 containerd[1459]: time="2026-03-02T13:15:27.162573692Z" level=info msg="Connect containerd service" Mar 2 13:15:27.164486 containerd[1459]: time="2026-03-02T13:15:27.162872300Z" level=info msg="using legacy CRI server" Mar 2 13:15:27.164486 containerd[1459]: time="2026-03-02T13:15:27.162887994Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 2 13:15:27.164486 containerd[1459]: time="2026-03-02T13:15:27.163028675Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 2 13:15:27.164486 containerd[1459]: time="2026-03-02T13:15:27.164362393Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 13:15:27.165857 
containerd[1459]: time="2026-03-02T13:15:27.164623395Z" level=info msg="Start subscribing containerd event" Mar 2 13:15:27.165857 containerd[1459]: time="2026-03-02T13:15:27.164829414Z" level=info msg="Start recovering state" Mar 2 13:15:27.165857 containerd[1459]: time="2026-03-02T13:15:27.165195632Z" level=info msg="Start event monitor" Mar 2 13:15:27.165857 containerd[1459]: time="2026-03-02T13:15:27.165238103Z" level=info msg="Start snapshots syncer" Mar 2 13:15:27.165857 containerd[1459]: time="2026-03-02T13:15:27.165250509Z" level=info msg="Start cni network conf syncer for default" Mar 2 13:15:27.165857 containerd[1459]: time="2026-03-02T13:15:27.165258545Z" level=info msg="Start streaming server" Mar 2 13:15:27.165857 containerd[1459]: time="2026-03-02T13:15:27.165853452Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 2 13:15:27.166200 containerd[1459]: time="2026-03-02T13:15:27.165929948Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 2 13:15:27.169035 containerd[1459]: time="2026-03-02T13:15:27.169010349Z" level=info msg="containerd successfully booted in 0.107100s" Mar 2 13:15:27.169270 systemd[1]: Started containerd.service - containerd container runtime. Mar 2 13:15:27.258274 sshd[1516]: Accepted publickey for core from 10.0.0.1 port 40164 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:15:27.261856 sshd[1516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:27.292541 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 2 13:15:27.325511 systemd-networkd[1385]: eth0: Gained IPv6LL Mar 2 13:15:27.325973 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 2 13:15:27.339981 systemd-logind[1441]: New session 1 of user core. Mar 2 13:15:27.348813 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 2 13:15:27.365731 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 2 13:15:27.377988 systemd[1]: Reached target network-online.target - Network is Online. Mar 2 13:15:27.398823 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 2 13:15:27.410813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:15:27.423022 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 2 13:15:27.432798 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 2 13:15:27.456962 (systemd)[1525]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 2 13:15:27.479700 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 2 13:15:27.485333 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 2 13:15:27.485901 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 2 13:15:27.494022 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 2 13:15:27.655842 systemd[1525]: Queued start job for default target default.target. Mar 2 13:15:27.666676 systemd[1525]: Created slice app.slice - User Application Slice. Mar 2 13:15:27.666713 systemd[1525]: Reached target paths.target - Paths. Mar 2 13:15:27.666738 systemd[1525]: Reached target timers.target - Timers. Mar 2 13:15:27.669714 systemd[1525]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Mar 2 13:15:27.704304 systemd[1525]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 2 13:15:27.705397 systemd[1525]: Reached target sockets.target - Sockets. Mar 2 13:15:27.705421 systemd[1525]: Reached target basic.target - Basic System. Mar 2 13:15:27.705479 systemd[1525]: Reached target default.target - Main User Target. Mar 2 13:15:27.705533 systemd[1525]: Startup finished in 234ms. Mar 2 13:15:27.706228 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 2 13:15:27.718076 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 2 13:15:27.756732 tar[1454]: linux-amd64/README.md Mar 2 13:15:27.787041 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 2 13:15:27.828215 systemd[1]: Started sshd@1-10.0.0.96:22-10.0.0.1:40178.service - OpenSSH per-connection server daemon (10.0.0.1:40178). Mar 2 13:15:27.945387 sshd[1552]: Accepted publickey for core from 10.0.0.1 port 40178 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:15:27.950172 sshd[1552]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:27.968730 systemd-logind[1441]: New session 2 of user core. Mar 2 13:15:27.989837 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 2 13:15:28.079986 sshd[1552]: pam_unix(sshd:session): session closed for user core Mar 2 13:15:28.090871 systemd[1]: sshd@1-10.0.0.96:22-10.0.0.1:40178.service: Deactivated successfully. Mar 2 13:15:28.093149 systemd[1]: session-2.scope: Deactivated successfully. Mar 2 13:15:28.099927 systemd-logind[1441]: Session 2 logged out. Waiting for processes to exit. Mar 2 13:15:28.105400 systemd[1]: Started sshd@2-10.0.0.96:22-10.0.0.1:40190.service - OpenSSH per-connection server daemon (10.0.0.1:40190). Mar 2 13:15:28.112911 systemd-logind[1441]: Removed session 2. Mar 2 13:15:28.166119 sshd[1559]: Accepted publickey for core from 10.0.0.1 port 40190 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:15:28.168459 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:28.179808 systemd-logind[1441]: New session 3 of user core. Mar 2 13:15:28.193969 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 2 13:15:28.286711 sshd[1559]: pam_unix(sshd:session): session closed for user core Mar 2 13:15:28.299882 systemd[1]: sshd@2-10.0.0.96:22-10.0.0.1:40190.service: Deactivated successfully. Mar 2 13:15:28.303499 systemd[1]: session-3.scope: Deactivated successfully. Mar 2 13:15:28.305309 systemd-logind[1441]: Session 3 logged out. Waiting for processes to exit. Mar 2 13:15:28.307331 systemd-logind[1441]: Removed session 3. Mar 2 13:15:29.042805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:15:29.050933 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:15:29.052267 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 2 13:15:29.062716 systemd[1]: Startup finished in 2.441s (kernel) + 9.992s (initrd) + 7.230s (userspace) = 19.664s. 
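systemd's own analyzer reproduces and refines the 19.664s figure logged above; these are stock systemd-analyze invocations:

    # Kernel/initrd/userspace split, as in the "Startup finished" entry.
    systemd-analyze time
    # Units ranked by activation time.
    systemd-analyze blame | head -n 10
    # Slowest dependency chain to the default target.
    systemd-analyze critical-chain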
Mar 2 13:15:30.173553 kubelet[1570]: E0302 13:15:30.172777 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:15:30.189001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:15:30.189298 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:15:30.191852 systemd[1]: kubelet.service: Consumed 1.398s CPU time. Mar 2 13:15:38.624595 systemd[1]: Started sshd@3-10.0.0.96:22-10.0.0.1:51170.service - OpenSSH per-connection server daemon (10.0.0.1:51170). Mar 2 13:15:38.743040 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 51170 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:15:38.749100 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:38.768660 systemd-logind[1441]: New session 4 of user core. Mar 2 13:15:38.787010 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 2 13:15:38.907407 sshd[1585]: pam_unix(sshd:session): session closed for user core Mar 2 13:15:38.928768 systemd[1]: sshd@3-10.0.0.96:22-10.0.0.1:51170.service: Deactivated successfully. Mar 2 13:15:38.950313 systemd[1]: session-4.scope: Deactivated successfully. Mar 2 13:15:38.958672 systemd-logind[1441]: Session 4 logged out. Waiting for processes to exit. Mar 2 13:15:38.991296 systemd[1]: Started sshd@4-10.0.0.96:22-10.0.0.1:36702.service - OpenSSH per-connection server daemon (10.0.0.1:36702). Mar 2 13:15:39.010211 systemd-logind[1441]: Removed session 4. Mar 2 13:15:39.120965 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 36702 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:15:39.126367 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:39.151208 systemd-logind[1441]: New session 5 of user core. Mar 2 13:15:39.161947 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 2 13:15:39.247225 sshd[1592]: pam_unix(sshd:session): session closed for user core Mar 2 13:15:39.269866 systemd[1]: sshd@4-10.0.0.96:22-10.0.0.1:36702.service: Deactivated successfully. Mar 2 13:15:39.274356 systemd[1]: session-5.scope: Deactivated successfully. Mar 2 13:15:39.277230 systemd-logind[1441]: Session 5 logged out. Waiting for processes to exit. Mar 2 13:15:39.290044 systemd[1]: Started sshd@5-10.0.0.96:22-10.0.0.1:36716.service - OpenSSH per-connection server daemon (10.0.0.1:36716). Mar 2 13:15:39.293465 systemd-logind[1441]: Removed session 5. Mar 2 13:15:39.355968 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 36716 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:15:39.358267 sshd[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:39.381913 systemd-logind[1441]: New session 6 of user core. Mar 2 13:15:39.396008 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 2 13:15:39.484800 sshd[1599]: pam_unix(sshd:session): session closed for user core Mar 2 13:15:39.496197 systemd[1]: sshd@5-10.0.0.96:22-10.0.0.1:36716.service: Deactivated successfully. Mar 2 13:15:39.499077 systemd[1]: session-6.scope: Deactivated successfully. Mar 2 13:15:39.500613 systemd-logind[1441]: Session 6 logged out. Waiting for processes to exit. 
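The kubelet exit above is the expected pre-bootstrap state on a kubeadm-style node: /var/lib/kubelet/config.yaml is only written once kubeadm init or kubeadm join runs, so until then every restart fails the same way. For reference, a minimal file of the shape kubelet is looking for (kubelet.config.k8s.io/v1beta1 schema; values illustrative, not recovered from this host):

    # /var/lib/kubelet/config.yaml (illustrative minimum)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # systemd cgroup driver, matching the SystemdCgroup=true runc option
    # containerd logged at startup.
    cgroupDriver: systemd
    # Directory kubelet watches for control-plane static pods.
    staticPodPath: /etc/kubernetes/manifests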
Mar 2 13:15:39.513210 systemd[1]: Started sshd@6-10.0.0.96:22-10.0.0.1:36726.service - OpenSSH per-connection server daemon (10.0.0.1:36726). Mar 2 13:15:39.517950 systemd-logind[1441]: Removed session 6. Mar 2 13:15:39.572597 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 36726 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:15:39.575541 sshd[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:39.595731 systemd-logind[1441]: New session 7 of user core. Mar 2 13:15:39.606975 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 2 13:15:39.715279 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 2 13:15:39.716181 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:15:39.749246 sudo[1609]: pam_unix(sudo:session): session closed for user root Mar 2 13:15:39.756390 sshd[1606]: pam_unix(sshd:session): session closed for user core Mar 2 13:15:39.778620 systemd[1]: sshd@6-10.0.0.96:22-10.0.0.1:36726.service: Deactivated successfully. Mar 2 13:15:39.781195 systemd[1]: session-7.scope: Deactivated successfully. Mar 2 13:15:39.787199 systemd-logind[1441]: Session 7 logged out. Waiting for processes to exit. Mar 2 13:15:39.803885 systemd[1]: Started sshd@7-10.0.0.96:22-10.0.0.1:36734.service - OpenSSH per-connection server daemon (10.0.0.1:36734). Mar 2 13:15:39.805373 systemd-logind[1441]: Removed session 7. Mar 2 13:15:39.852343 sshd[1614]: Accepted publickey for core from 10.0.0.1 port 36734 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:15:39.855298 sshd[1614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:39.868265 systemd-logind[1441]: New session 8 of user core. Mar 2 13:15:39.878708 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 2 13:15:39.973813 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 2 13:15:39.975745 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:15:39.987951 sudo[1618]: pam_unix(sudo:session): session closed for user root Mar 2 13:15:40.007712 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 2 13:15:40.008238 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:15:40.053627 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Mar 2 13:15:40.065698 auditctl[1621]: No rules Mar 2 13:15:40.066737 systemd[1]: audit-rules.service: Deactivated successfully. Mar 2 13:15:40.067856 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Mar 2 13:15:40.094267 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Mar 2 13:15:40.177925 augenrules[1639]: No rules Mar 2 13:15:40.181032 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Mar 2 13:15:40.183621 sudo[1617]: pam_unix(sudo:session): session closed for user root Mar 2 13:15:40.193143 sshd[1614]: pam_unix(sshd:session): session closed for user core Mar 2 13:15:40.211001 systemd[1]: sshd@7-10.0.0.96:22-10.0.0.1:36734.service: Deactivated successfully. Mar 2 13:15:40.214636 systemd[1]: session-8.scope: Deactivated successfully. Mar 2 13:15:40.219416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
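The sudo records above spell out exactly what session 8 did to the audit ruleset; replayed as shell, with auditctl -l as the check that explains the "No rules" lines from auditctl[1621] and augenrules[1639]:

    # Drop the shipped rule files (paths copied from the sudo log entries).
    rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
    # Regenerate and reload the ruleset from what remains in rules.d.
    systemctl restart audit-rules
    # An empty ruleset prints "No rules".
    auditctl -l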
Mar 2 13:15:40.224822 systemd-logind[1441]: Session 8 logged out. Waiting for processes to exit. Mar 2 13:15:40.244914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:15:40.264458 systemd[1]: Started sshd@8-10.0.0.96:22-10.0.0.1:36740.service - OpenSSH per-connection server daemon (10.0.0.1:36740). Mar 2 13:15:40.266793 systemd-logind[1441]: Removed session 8. Mar 2 13:15:40.317960 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 36740 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:15:40.323405 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:15:40.347782 systemd-logind[1441]: New session 9 of user core. Mar 2 13:15:40.364362 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 2 13:15:40.458691 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 2 13:15:40.459369 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 2 13:15:40.605637 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:15:40.607320 (kubelet)[1664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:15:40.696680 kubelet[1664]: E0302 13:15:40.696456 1664 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:15:40.703858 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:15:40.704154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:15:41.438473 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 2 13:15:41.440777 (dockerd)[1685]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 2 13:15:42.028910 dockerd[1685]: time="2026-03-02T13:15:42.028110267Z" level=info msg="Starting up" Mar 2 13:15:42.285510 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2003329839-merged.mount: Deactivated successfully. Mar 2 13:15:42.356893 dockerd[1685]: time="2026-03-02T13:15:42.356744548Z" level=info msg="Loading containers: start." Mar 2 13:15:42.699909 kernel: Initializing XFRM netlink socket Mar 2 13:15:43.006767 systemd-networkd[1385]: docker0: Link UP Mar 2 13:15:43.078617 dockerd[1685]: time="2026-03-02T13:15:43.078341831Z" level=info msg="Loading containers: done." Mar 2 13:15:43.120175 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2098079042-merged.mount: Deactivated successfully. 
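dockerd is about to finish coming up on the overlay2 storage driver (see the entries that follow). Once the API socket is listening, the stock CLI can confirm what the daemon logged; the format strings below use documented docker Go-template fields:

    # Storage driver and daemon version, matching the "Docker daemon" entry.
    docker info --format '{{.Driver}} {{.ServerVersion}}'
    # The daemon logs "API listen on /run/docker.sock"; talk to it directly.
    docker -H unix:///run/docker.sock version --format '{{.Server.Version}}'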
Mar 2 13:15:43.132421 dockerd[1685]: time="2026-03-02T13:15:43.131999042Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 2 13:15:43.132668 dockerd[1685]: time="2026-03-02T13:15:43.132609331Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Mar 2 13:15:43.133361 dockerd[1685]: time="2026-03-02T13:15:43.132775567Z" level=info msg="Daemon has completed initialization" Mar 2 13:15:43.218141 dockerd[1685]: time="2026-03-02T13:15:43.217870442Z" level=info msg="API listen on /run/docker.sock" Mar 2 13:15:43.218251 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 2 13:15:44.409976 kernel: hrtimer: interrupt took 3857383 ns Mar 2 13:15:45.578485 containerd[1459]: time="2026-03-02T13:15:45.577108538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\"" Mar 2 13:15:46.661721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4256590261.mount: Deactivated successfully. Mar 2 13:15:50.742388 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 2 13:15:50.769917 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:15:51.291710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:15:51.314529 (kubelet)[1897]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:15:51.402766 kubelet[1897]: E0302 13:15:51.400293 1897 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:15:51.411169 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:15:51.411398 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
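kubelet.service is now in a supervised crash loop: the restart counter ticked at 13:15:40 and again at 13:15:50, and keeps firing roughly every ten seconds below, consistent with a Restart=always/RestartSec=10 unit configuration like the one the kubeadm packages ship (the actual unit file is not shown in this log). Standard systemd commands expose both the policy and the failure:

    # Restart policy actually in effect for the unit.
    systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
    # Last few log lines, including the config.yaml error above.
    journalctl -u kubelet -n 20 --no-pager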
Mar 2 13:15:52.075670 containerd[1459]: time="2026-03-02T13:15:52.075509298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:15:52.079047 containerd[1459]: time="2026-03-02T13:15:52.078807446Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116186" Mar 2 13:15:52.081111 containerd[1459]: time="2026-03-02T13:15:52.081038931Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:15:52.087697 containerd[1459]: time="2026-03-02T13:15:52.087609177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:15:52.102392 containerd[1459]: time="2026-03-02T13:15:52.101482339Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 6.521579724s" Mar 2 13:15:52.102392 containerd[1459]: time="2026-03-02T13:15:52.102330317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\"" Mar 2 13:15:52.106598 containerd[1459]: time="2026-03-02T13:15:52.106484608Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\"" Mar 2 13:15:54.320430 containerd[1459]: time="2026-03-02T13:15:54.320224632Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:15:54.322295 containerd[1459]: time="2026-03-02T13:15:54.322176501Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021810" Mar 2 13:15:54.324023 containerd[1459]: time="2026-03-02T13:15:54.323978232Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:15:54.331533 containerd[1459]: time="2026-03-02T13:15:54.330750736Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.224212582s" Mar 2 13:15:54.331533 containerd[1459]: time="2026-03-02T13:15:54.330865591Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\"" Mar 2 13:15:54.331533 containerd[1459]: time="2026-03-02T13:15:54.330772461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:15:54.332250 containerd[1459]: 
time="2026-03-02T13:15:54.332189178Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\"" Mar 2 13:15:56.008607 containerd[1459]: time="2026-03-02T13:15:56.007660756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:15:56.013630 containerd[1459]: time="2026-03-02T13:15:56.013192190Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162746" Mar 2 13:15:56.016190 containerd[1459]: time="2026-03-02T13:15:56.016126229Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:15:56.030159 containerd[1459]: time="2026-03-02T13:15:56.030037861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:15:56.034678 containerd[1459]: time="2026-03-02T13:15:56.033205796Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.700955408s" Mar 2 13:15:56.034678 containerd[1459]: time="2026-03-02T13:15:56.033290078Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\"" Mar 2 13:15:56.038765 containerd[1459]: time="2026-03-02T13:15:56.035888067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\"" Mar 2 13:15:59.014767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3776151427.mount: Deactivated successfully. Mar 2 13:16:01.493386 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 2 13:16:01.530314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:16:01.831822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:16:01.841157 (kubelet)[1930]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:16:01.940975 kubelet[1930]: E0302 13:16:01.940680 1930 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:16:01.945512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:16:01.945994 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 2 13:16:02.149981 containerd[1459]: time="2026-03-02T13:16:02.148939402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:02.155420 containerd[1459]: time="2026-03-02T13:16:02.154933744Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828647" Mar 2 13:16:02.157386 containerd[1459]: time="2026-03-02T13:16:02.156928405Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:02.164308 containerd[1459]: time="2026-03-02T13:16:02.164098334Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:02.165968 containerd[1459]: time="2026-03-02T13:16:02.165755043Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 6.129828638s" Mar 2 13:16:02.165968 containerd[1459]: time="2026-03-02T13:16:02.165826557Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\"" Mar 2 13:16:02.172394 containerd[1459]: time="2026-03-02T13:16:02.172311886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Mar 2 13:16:02.838470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838743367.mount: Deactivated successfully. 
Mar 2 13:16:04.973091 containerd[1459]: time="2026-03-02T13:16:04.972925817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:04.978122 containerd[1459]: time="2026-03-02T13:16:04.977988734Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Mar 2 13:16:04.983679 containerd[1459]: time="2026-03-02T13:16:04.982226017Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:04.989328 containerd[1459]: time="2026-03-02T13:16:04.989233212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:04.992144 containerd[1459]: time="2026-03-02T13:16:04.991108660Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.81875486s" Mar 2 13:16:04.992144 containerd[1459]: time="2026-03-02T13:16:04.991179576Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Mar 2 13:16:04.994401 containerd[1459]: time="2026-03-02T13:16:04.994235942Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 2 13:16:05.636205 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount877716505.mount: Deactivated successfully. 
Mar 2 13:16:05.659280 containerd[1459]: time="2026-03-02T13:16:05.657969296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:05.661362 containerd[1459]: time="2026-03-02T13:16:05.661204285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 2 13:16:05.664915 containerd[1459]: time="2026-03-02T13:16:05.664207681Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:05.673385 containerd[1459]: time="2026-03-02T13:16:05.670410644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:05.673385 containerd[1459]: time="2026-03-02T13:16:05.671706477Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 677.371686ms" Mar 2 13:16:05.673385 containerd[1459]: time="2026-03-02T13:16:05.672903364Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 2 13:16:05.675537 containerd[1459]: time="2026-03-02T13:16:05.675410733Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Mar 2 13:16:06.390140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794286867.mount: Deactivated successfully. Mar 2 13:16:09.330649 containerd[1459]: time="2026-03-02T13:16:09.329789785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:09.332912 containerd[1459]: time="2026-03-02T13:16:09.332848377Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718840" Mar 2 13:16:09.336645 containerd[1459]: time="2026-03-02T13:16:09.336280864Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:09.342947 containerd[1459]: time="2026-03-02T13:16:09.341487553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:16:09.345448 containerd[1459]: time="2026-03-02T13:16:09.343931904Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 3.668465549s" Mar 2 13:16:09.345448 containerd[1459]: time="2026-03-02T13:16:09.345225192Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Mar 2 13:16:11.400747 update_engine[1445]: I20260302 13:16:11.400254 1445 update_attempter.cc:509] Updating boot flags... 
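update_engine's "Updating boot flags" entry is Flatcar's A/B update machinery marking the booted partition good, in line with locksmithd's earlier UPDATE_STATUS_IDLE state. The bundled clients report the same state machine from a shell (stock Flatcar tooling; output fields vary by version):

    # Current update-engine state (CURRENT_OP, NEW_VERSION, ...).
    update_engine_client -status
    # Locksmith's view of the "reboot" strategy logged at 13:15:26.
    locksmithctl status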
Mar 2 13:16:11.486688 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2090) Mar 2 13:16:11.578638 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2094) Mar 2 13:16:11.653901 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 35 scanned by (udev-worker) (2094) Mar 2 13:16:11.996681 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Mar 2 13:16:12.076030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:16:12.505309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:16:12.509864 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 2 13:16:12.666492 kubelet[2106]: E0302 13:16:12.666395 2106 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 2 13:16:12.672700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 2 13:16:12.673012 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 2 13:16:15.766393 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:16:15.785727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:16:15.839836 systemd[1]: Reloading requested from client PID 2123 ('systemctl') (unit session-9.scope)... Mar 2 13:16:15.839888 systemd[1]: Reloading... Mar 2 13:16:15.982885 zram_generator::config[2162]: No configuration found. Mar 2 13:16:16.189988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:16:16.300225 systemd[1]: Reloading finished in 459 ms. Mar 2 13:16:16.392477 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:16:16.399193 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:16:16.401068 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:16:16.401440 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:16:16.422109 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:16:16.704668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:16:16.710533 (kubelet)[2212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:16:16.841645 kubelet[2212]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:16:16.842412 kubelet[2212]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 2 13:16:16.842412 kubelet[2212]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:16:16.844517 kubelet[2212]: I0302 13:16:16.843190 2212 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:16:17.405461 kubelet[2212]: I0302 13:16:17.401826 2212 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 13:16:17.405461 kubelet[2212]: I0302 13:16:17.403309 2212 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:16:17.405461 kubelet[2212]: I0302 13:16:17.403832 2212 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:16:17.496214 kubelet[2212]: E0302 13:16:17.495335 2212 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:16:17.510638 kubelet[2212]: I0302 13:16:17.510154 2212 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:16:17.527652 kubelet[2212]: E0302 13:16:17.525092 2212 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:16:17.527652 kubelet[2212]: I0302 13:16:17.525283 2212 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 2 13:16:17.544645 kubelet[2212]: I0302 13:16:17.544461 2212 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 2 13:16:17.545330 kubelet[2212]: I0302 13:16:17.545140 2212 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:16:17.546054 kubelet[2212]: I0302 13:16:17.545265 2212 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:16:17.547623 kubelet[2212]: I0302 13:16:17.546064 2212 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 13:16:17.547623 kubelet[2212]: I0302 13:16:17.546081 2212 container_manager_linux.go:303] "Creating device plugin manager" Mar 2 13:16:17.547623 kubelet[2212]: I0302 13:16:17.546327 2212 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:16:17.558054 kubelet[2212]: I0302 13:16:17.557887 2212 kubelet.go:480] "Attempting to sync node with API server" Mar 2 13:16:17.558054 kubelet[2212]: I0302 13:16:17.557972 2212 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:16:17.558054 kubelet[2212]: I0302 13:16:17.558024 2212 kubelet.go:386] "Adding apiserver pod source" Mar 2 13:16:17.562981 kubelet[2212]: I0302 13:16:17.562179 2212 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:16:17.580834 kubelet[2212]: E0302 13:16:17.580511 2212 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:16:17.586629 kubelet[2212]: I0302 13:16:17.584927 2212 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:16:17.587688 kubelet[2212]: I0302 13:16:17.587130 2212 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:16:17.589499 
kubelet[2212]: W0302 13:16:17.588262 2212 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 2 13:16:17.598610 kubelet[2212]: E0302 13:16:17.593115 2212 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:16:17.609199 kubelet[2212]: I0302 13:16:17.609069 2212 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 2 13:16:17.610110 kubelet[2212]: I0302 13:16:17.610011 2212 server.go:1289] "Started kubelet" Mar 2 13:16:17.610182 kubelet[2212]: I0302 13:16:17.610125 2212 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:16:17.613733 kubelet[2212]: I0302 13:16:17.610356 2212 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:16:17.613733 kubelet[2212]: I0302 13:16:17.612847 2212 server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:16:17.614248 kubelet[2212]: I0302 13:16:17.614227 2212 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:16:17.614364 kubelet[2212]: I0302 13:16:17.614235 2212 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 13:16:17.618241 kubelet[2212]: E0302 13:16:17.616514 2212 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.96:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.96:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899089813e9ba33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:16:17.609136691 +0000 UTC m=+0.884603454,LastTimestamp:2026-03-02 13:16:17.609136691 +0000 UTC m=+0.884603454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:16:17.620912 kubelet[2212]: I0302 13:16:17.614404 2212 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:16:17.620912 kubelet[2212]: I0302 13:16:17.620202 2212 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 2 13:16:17.621313 kubelet[2212]: I0302 13:16:17.621255 2212 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 2 13:16:17.621364 kubelet[2212]: I0302 13:16:17.621352 2212 reconciler.go:26] "Reconciler: start to sync state" Mar 2 13:16:17.621942 kubelet[2212]: E0302 13:16:17.621882 2212 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:16:17.622304 kubelet[2212]: E0302 13:16:17.622226 2212 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 
13:16:17.622510 kubelet[2212]: E0302 13:16:17.622396 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="200ms" Mar 2 13:16:17.624183 kubelet[2212]: I0302 13:16:17.623701 2212 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:16:17.624475 kubelet[2212]: E0302 13:16:17.624402 2212 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:16:17.625908 kubelet[2212]: I0302 13:16:17.625817 2212 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:16:17.625908 kubelet[2212]: I0302 13:16:17.625871 2212 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:16:17.685747 kubelet[2212]: I0302 13:16:17.683944 2212 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 13:16:17.685747 kubelet[2212]: I0302 13:16:17.683971 2212 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 13:16:17.685747 kubelet[2212]: I0302 13:16:17.683998 2212 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:16:17.723298 kubelet[2212]: E0302 13:16:17.722702 2212 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:16:17.774677 kubelet[2212]: I0302 13:16:17.774472 2212 policy_none.go:49] "None policy: Start" Mar 2 13:16:17.774677 kubelet[2212]: I0302 13:16:17.774668 2212 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 2 13:16:17.774677 kubelet[2212]: I0302 13:16:17.774695 2212 state_mem.go:35] "Initializing new in-memory state store" Mar 2 13:16:17.783223 kubelet[2212]: I0302 13:16:17.782212 2212 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 2 13:16:17.788927 kubelet[2212]: I0302 13:16:17.788856 2212 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 2 13:16:17.789040 kubelet[2212]: I0302 13:16:17.788953 2212 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 2 13:16:17.789040 kubelet[2212]: I0302 13:16:17.788985 2212 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 2 13:16:17.789040 kubelet[2212]: I0302 13:16:17.788993 2212 kubelet.go:2436] "Starting kubelet main sync loop" Mar 2 13:16:17.789137 kubelet[2212]: E0302 13:16:17.789040 2212 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:16:17.794285 kubelet[2212]: E0302 13:16:17.793872 2212 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:16:17.796882 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 2 13:16:17.818382 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
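Note on the repeated "connection refused" failures against https://10.0.0.96:6443 above: they are expected at this stage. The kubelet is bootstrapping the control plane from static manifests, so nothing is listening on the apiserver port yet, and every list/watch reflector fails the same way. A minimal Go sketch of the kind of TCP probe that would confirm this, with the host and port taken from the log (this is an illustration, not part of kubelet):

package main

import (
	"fmt"
	"net"
	"time"
)

// main attempts a TCP connection to the apiserver endpoint seen in the
// log (10.0.0.96:6443). While only the static-pod kubelet is running,
// this fails with "connection refused", matching the reflector errors.
func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.96:6443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is accepting connections")
}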
Mar 2 13:16:17.823282 kubelet[2212]: E0302 13:16:17.823199 2212 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 2 13:16:17.824297 kubelet[2212]: E0302 13:16:17.824266 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="400ms" Mar 2 13:16:17.829168 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Mar 2 13:16:17.843197 kubelet[2212]: E0302 13:16:17.842922 2212 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:16:17.844784 kubelet[2212]: I0302 13:16:17.844122 2212 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 13:16:17.844784 kubelet[2212]: I0302 13:16:17.844604 2212 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:16:17.846853 kubelet[2212]: I0302 13:16:17.846414 2212 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 13:16:17.849708 kubelet[2212]: E0302 13:16:17.849211 2212 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 2 13:16:17.850050 kubelet[2212]: E0302 13:16:17.849742 2212 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:16:17.925685 kubelet[2212]: I0302 13:16:17.924305 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:17.925685 kubelet[2212]: I0302 13:16:17.924359 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:17.925685 kubelet[2212]: I0302 13:16:17.924393 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:17.925685 kubelet[2212]: I0302 13:16:17.924426 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:16:17.925685 kubelet[2212]: I0302 13:16:17.924878 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16858461cde391899503911b654298a4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"16858461cde391899503911b654298a4\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:16:17.926047 kubelet[2212]: I0302 13:16:17.924903 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:17.926047 kubelet[2212]: I0302 13:16:17.924928 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:17.926047 kubelet[2212]: I0302 13:16:17.924953 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16858461cde391899503911b654298a4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"16858461cde391899503911b654298a4\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:16:17.926047 kubelet[2212]: I0302 13:16:17.924977 2212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16858461cde391899503911b654298a4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"16858461cde391899503911b654298a4\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:16:17.926241 systemd[1]: Created slice kubepods-burstable-pod16858461cde391899503911b654298a4.slice - libcontainer container kubepods-burstable-pod16858461cde391899503911b654298a4.slice. Mar 2 13:16:17.948396 kubelet[2212]: E0302 13:16:17.948250 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:17.953233 kubelet[2212]: I0302 13:16:17.952855 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:16:17.953406 kubelet[2212]: E0302 13:16:17.953354 2212 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Mar 2 13:16:17.960742 systemd[1]: Created slice kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice - libcontainer container kubepods-burstable-pod8747e1f8a49a618fbc1324a8fe2d3754.slice. Mar 2 13:16:17.971995 kubelet[2212]: E0302 13:16:17.971143 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:17.984368 systemd[1]: Created slice kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice - libcontainer container kubepods-burstable-pode944e4cb17af904786c3a2e01e298498.slice. 
Mar 2 13:16:17.990540 kubelet[2212]: E0302 13:16:17.990435 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:18.156682 kubelet[2212]: I0302 13:16:18.156432 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:16:18.157877 kubelet[2212]: E0302 13:16:18.157784 2212 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Mar 2 13:16:18.228196 kubelet[2212]: E0302 13:16:18.227032 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="800ms" Mar 2 13:16:18.251876 kubelet[2212]: E0302 13:16:18.250134 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:18.254379 containerd[1459]: time="2026-03-02T13:16:18.253267077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:16858461cde391899503911b654298a4,Namespace:kube-system,Attempt:0,}" Mar 2 13:16:18.273740 kubelet[2212]: E0302 13:16:18.273111 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:18.274371 containerd[1459]: time="2026-03-02T13:16:18.274119241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,}" Mar 2 13:16:18.298171 kubelet[2212]: E0302 13:16:18.295019 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:18.299207 containerd[1459]: time="2026-03-02T13:16:18.299154826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,}" Mar 2 13:16:18.565638 kubelet[2212]: I0302 13:16:18.564739 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:16:18.566209 kubelet[2212]: E0302 13:16:18.565704 2212 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Mar 2 13:16:18.697011 kubelet[2212]: E0302 13:16:18.696807 2212 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.96:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 2 13:16:18.711980 kubelet[2212]: E0302 13:16:18.711680 2212 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.96:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Mar 2 13:16:18.751347 kubelet[2212]: 
E0302 13:16:18.750434 2212 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.96:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Mar 2 13:16:18.803980 kubelet[2212]: E0302 13:16:18.803720 2212 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.96:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.96:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1899089813e9ba33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:16:17.609136691 +0000 UTC m=+0.884603454,LastTimestamp:2026-03-02 13:16:17.609136691 +0000 UTC m=+0.884603454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:16:18.992897 kubelet[2212]: E0302 13:16:18.990782 2212 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.96:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 2 13:16:19.029065 kubelet[2212]: E0302 13:16:19.028944 2212 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.96:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.96:6443: connect: connection refused" interval="1.6s" Mar 2 13:16:19.348998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737045859.mount: Deactivated successfully. 
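The recurring "Nameserver limits exceeded" warnings above mean the host's resolv.conf lists more nameservers than the classic three-entry resolver limit, so the kubelet trims the list to the first three ("1.1.1.1 1.0.0.1 8.8.8.8"). A sketch of that trimming, assuming a plain resolv.conf parser (the path and the limit come from the log context; this is not kubelet's actual dns.go):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic resolver limit the warning refers to

// appliedNameservers reads resolv.conf and keeps only the first three
// "nameserver" entries, mirroring "some nameservers have been omitted".
func appliedNameservers(path string) ([]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		servers = servers[:maxNameservers]
	}
	return servers, sc.Err()
}

func main() {
	s, err := appliedNameservers("/etc/resolv.conf")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("applied nameserver line:", strings.Join(s, " "))
}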
Mar 2 13:16:19.369367 kubelet[2212]: I0302 13:16:19.369323 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:16:19.370181 kubelet[2212]: E0302 13:16:19.370136 2212 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.96:6443/api/v1/nodes\": dial tcp 10.0.0.96:6443: connect: connection refused" node="localhost" Mar 2 13:16:19.390677 containerd[1459]: time="2026-03-02T13:16:19.387463101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:16:19.400497 containerd[1459]: time="2026-03-02T13:16:19.399706447Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 2 13:16:19.404054 containerd[1459]: time="2026-03-02T13:16:19.403957929Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:16:19.406693 containerd[1459]: time="2026-03-02T13:16:19.406516173Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:16:19.411249 containerd[1459]: time="2026-03-02T13:16:19.411071110Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:16:19.417627 containerd[1459]: time="2026-03-02T13:16:19.417239765Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 2 13:16:19.418176 containerd[1459]: time="2026-03-02T13:16:19.418094196Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:16:19.428045 containerd[1459]: time="2026-03-02T13:16:19.427871726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 2 13:16:19.429880 containerd[1459]: time="2026-03-02T13:16:19.429743371Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.155471647s" Mar 2 13:16:19.431412 containerd[1459]: time="2026-03-02T13:16:19.431335054Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.177900661s" Mar 2 13:16:19.432977 containerd[1459]: time="2026-03-02T13:16:19.432896613Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.131983966s" Mar 2 13:16:19.646949 kubelet[2212]: E0302 13:16:19.646501 2212 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.96:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.96:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Mar 2 13:16:19.747027 containerd[1459]: time="2026-03-02T13:16:19.745652879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:16:19.750618 containerd[1459]: time="2026-03-02T13:16:19.746743229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:16:19.750618 containerd[1459]: time="2026-03-02T13:16:19.746772725Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:19.750618 containerd[1459]: time="2026-03-02T13:16:19.746936219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:19.751903 containerd[1459]: time="2026-03-02T13:16:19.751632561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:16:19.751903 containerd[1459]: time="2026-03-02T13:16:19.751772072Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:16:19.751903 containerd[1459]: time="2026-03-02T13:16:19.751816320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:19.752164 containerd[1459]: time="2026-03-02T13:16:19.751945768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:19.753333 containerd[1459]: time="2026-03-02T13:16:19.752365602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:16:19.753333 containerd[1459]: time="2026-03-02T13:16:19.752430898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:16:19.753333 containerd[1459]: time="2026-03-02T13:16:19.752451904Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:19.753333 containerd[1459]: time="2026-03-02T13:16:19.752635262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:19.801918 systemd[1]: Started cri-containerd-8381b518334924389409c202e6a4343aa99714abd7f877c22716fdfbaf5f3801.scope - libcontainer container 8381b518334924389409c202e6a4343aa99714abd7f877c22716fdfbaf5f3801. Mar 2 13:16:19.811541 systemd[1]: Started cri-containerd-4b65ff09462c2287c4456989dc8ca64095b4f1f3bab42dcc4680be6d83702781.scope - libcontainer container 4b65ff09462c2287c4456989dc8ca64095b4f1f3bab42dcc4680be6d83702781. 
Mar 2 13:16:19.815425 systemd[1]: Started cri-containerd-e58667f06b6fbc4cb9aba397fcbfff31cb660b46baa287fe14bb87299ed40d7d.scope - libcontainer container e58667f06b6fbc4cb9aba397fcbfff31cb660b46baa287fe14bb87299ed40d7d. Mar 2 13:16:19.926905 containerd[1459]: time="2026-03-02T13:16:19.926726299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8747e1f8a49a618fbc1324a8fe2d3754,Namespace:kube-system,Attempt:0,} returns sandbox id \"8381b518334924389409c202e6a4343aa99714abd7f877c22716fdfbaf5f3801\"" Mar 2 13:16:19.931382 kubelet[2212]: E0302 13:16:19.931166 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:19.938390 containerd[1459]: time="2026-03-02T13:16:19.938234357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:e944e4cb17af904786c3a2e01e298498,Namespace:kube-system,Attempt:0,} returns sandbox id \"e58667f06b6fbc4cb9aba397fcbfff31cb660b46baa287fe14bb87299ed40d7d\"" Mar 2 13:16:19.943864 kubelet[2212]: E0302 13:16:19.943462 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:19.948328 containerd[1459]: time="2026-03-02T13:16:19.948232885Z" level=info msg="CreateContainer within sandbox \"8381b518334924389409c202e6a4343aa99714abd7f877c22716fdfbaf5f3801\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 2 13:16:19.961457 containerd[1459]: time="2026-03-02T13:16:19.960108226Z" level=info msg="CreateContainer within sandbox \"e58667f06b6fbc4cb9aba397fcbfff31cb660b46baa287fe14bb87299ed40d7d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 2 13:16:19.964367 containerd[1459]: time="2026-03-02T13:16:19.963867574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:16858461cde391899503911b654298a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b65ff09462c2287c4456989dc8ca64095b4f1f3bab42dcc4680be6d83702781\"" Mar 2 13:16:19.969422 kubelet[2212]: E0302 13:16:19.969132 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:19.984055 containerd[1459]: time="2026-03-02T13:16:19.982063699Z" level=info msg="CreateContainer within sandbox \"4b65ff09462c2287c4456989dc8ca64095b4f1f3bab42dcc4680be6d83702781\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 2 13:16:20.018305 containerd[1459]: time="2026-03-02T13:16:20.018177673Z" level=info msg="CreateContainer within sandbox \"8381b518334924389409c202e6a4343aa99714abd7f877c22716fdfbaf5f3801\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a4c714b925a55bb2156be8d5ec3e0090c757f6dbdcc478334a8f7ed0714d9a0b\"" Mar 2 13:16:20.020021 containerd[1459]: time="2026-03-02T13:16:20.019861114Z" level=info msg="StartContainer for \"a4c714b925a55bb2156be8d5ec3e0090c757f6dbdcc478334a8f7ed0714d9a0b\"" Mar 2 13:16:20.043218 containerd[1459]: time="2026-03-02T13:16:20.042510195Z" level=info msg="CreateContainer within sandbox \"e58667f06b6fbc4cb9aba397fcbfff31cb660b46baa287fe14bb87299ed40d7d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"241712c8d9f548cfaf4b989f769e28e9ca1fb46688ef2edf5a9191cf610649e5\"" Mar 2 13:16:20.044632 containerd[1459]: time="2026-03-02T13:16:20.043999154Z" level=info msg="StartContainer for \"241712c8d9f548cfaf4b989f769e28e9ca1fb46688ef2edf5a9191cf610649e5\"" Mar 2 13:16:20.059807 containerd[1459]: time="2026-03-02T13:16:20.059765140Z" level=info msg="CreateContainer within sandbox \"4b65ff09462c2287c4456989dc8ca64095b4f1f3bab42dcc4680be6d83702781\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ab270e138bf7c92b763dd0d3320e24f07c71098dd3fa788c2277d106479d435d\"" Mar 2 13:16:20.061676 containerd[1459]: time="2026-03-02T13:16:20.061479319Z" level=info msg="StartContainer for \"ab270e138bf7c92b763dd0d3320e24f07c71098dd3fa788c2277d106479d435d\"" Mar 2 13:16:20.096219 systemd[1]: Started cri-containerd-241712c8d9f548cfaf4b989f769e28e9ca1fb46688ef2edf5a9191cf610649e5.scope - libcontainer container 241712c8d9f548cfaf4b989f769e28e9ca1fb46688ef2edf5a9191cf610649e5. Mar 2 13:16:20.113779 systemd[1]: Started cri-containerd-a4c714b925a55bb2156be8d5ec3e0090c757f6dbdcc478334a8f7ed0714d9a0b.scope - libcontainer container a4c714b925a55bb2156be8d5ec3e0090c757f6dbdcc478334a8f7ed0714d9a0b. Mar 2 13:16:20.130056 systemd[1]: Started cri-containerd-ab270e138bf7c92b763dd0d3320e24f07c71098dd3fa788c2277d106479d435d.scope - libcontainer container ab270e138bf7c92b763dd0d3320e24f07c71098dd3fa788c2277d106479d435d. Mar 2 13:16:20.235754 containerd[1459]: time="2026-03-02T13:16:20.233859174Z" level=info msg="StartContainer for \"241712c8d9f548cfaf4b989f769e28e9ca1fb46688ef2edf5a9191cf610649e5\" returns successfully" Mar 2 13:16:20.256801 containerd[1459]: time="2026-03-02T13:16:20.256740647Z" level=info msg="StartContainer for \"ab270e138bf7c92b763dd0d3320e24f07c71098dd3fa788c2277d106479d435d\" returns successfully" Mar 2 13:16:20.257331 containerd[1459]: time="2026-03-02T13:16:20.256977670Z" level=info msg="StartContainer for \"a4c714b925a55bb2156be8d5ec3e0090c757f6dbdcc478334a8f7ed0714d9a0b\" returns successfully" Mar 2 13:16:20.815124 kubelet[2212]: E0302 13:16:20.814993 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:20.815835 kubelet[2212]: E0302 13:16:20.815201 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:20.828098 kubelet[2212]: E0302 13:16:20.827943 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:20.828242 kubelet[2212]: E0302 13:16:20.828160 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:20.836452 kubelet[2212]: E0302 13:16:20.836286 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:20.836452 kubelet[2212]: E0302 13:16:20.836479 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:20.975497 kubelet[2212]: I0302 13:16:20.975451 2212 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:16:21.883669 
kubelet[2212]: E0302 13:16:21.883624 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:21.885538 kubelet[2212]: E0302 13:16:21.885487 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:21.885914 kubelet[2212]: E0302 13:16:21.885747 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:21.889942 kubelet[2212]: E0302 13:16:21.889311 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:27.453099 kubelet[2212]: E0302 13:16:27.452734 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:27.453099 kubelet[2212]: E0302 13:16:27.452922 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:27.757419 kubelet[2212]: E0302 13:16:27.757249 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:27.757799 kubelet[2212]: E0302 13:16:27.757636 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:27.845522 kubelet[2212]: E0302 13:16:27.845180 2212 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Mar 2 13:16:27.845522 kubelet[2212]: E0302 13:16:27.845393 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:27.852118 kubelet[2212]: E0302 13:16:27.850638 2212 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 2 13:16:29.962854 kubelet[2212]: E0302 13:16:29.962405 2212 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 2 13:16:30.310693 kubelet[2212]: I0302 13:16:30.310373 2212 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 13:16:30.310693 kubelet[2212]: E0302 13:16:30.310415 2212 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 2 13:16:30.320060 kubelet[2212]: E0302 13:16:30.319761 2212 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1899089813e9ba33 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-03-02 13:16:17.609136691 +0000 UTC 
m=+0.884603454,LastTimestamp:2026-03-02 13:16:17.609136691 +0000 UTC m=+0.884603454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 2 13:16:30.327187 kubelet[2212]: I0302 13:16:30.324014 2212 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:16:31.119405 kubelet[2212]: E0302 13:16:31.119115 2212 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 2 13:16:31.119405 kubelet[2212]: I0302 13:16:31.119324 2212 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:31.122639 kubelet[2212]: I0302 13:16:31.122299 2212 apiserver.go:52] "Watching apiserver" Mar 2 13:16:31.230132 kubelet[2212]: I0302 13:16:31.229007 2212 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 2 13:16:31.252249 kubelet[2212]: I0302 13:16:31.245234 2212 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:16:31.252249 kubelet[2212]: E0302 13:16:31.251197 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:31.307428 kubelet[2212]: E0302 13:16:31.307241 2212 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:34.734728 systemd[1]: Reloading requested from client PID 2497 ('systemctl') (unit session-9.scope)... Mar 2 13:16:34.734780 systemd[1]: Reloading... Mar 2 13:16:34.852747 zram_generator::config[2542]: No configuration found. Mar 2 13:16:35.055177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 2 13:16:35.200516 systemd[1]: Reloading finished in 465 ms. Mar 2 13:16:35.279529 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:16:35.305855 systemd[1]: kubelet.service: Deactivated successfully. Mar 2 13:16:35.306319 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:16:35.306482 systemd[1]: kubelet.service: Consumed 5.371s CPU time, 136.9M memory peak, 0B memory swap peak. Mar 2 13:16:35.333842 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 2 13:16:35.613939 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 2 13:16:35.616439 (kubelet)[2581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 2 13:16:35.737689 kubelet[2581]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:16:35.737689 kubelet[2581]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
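The kubelet lines throughout follow the klog header format: a severity letter (I/W/E), MMDD date, wall-clock time, PID, then source file:line and the quoted message. A small parser sketch for that header, with the format inferred from the lines above (the field handling is a simplification, e.g. it assumes the first "] " ends the header):

package main

import (
	"fmt"
	"strings"
)

// parseKlog splits a klog header such as
// "I0302 13:16:35.755143 2581 server.go:530] ..." into its parts.
func parseKlog(line string) (sev, date, clock, pid, src, msg string, ok bool) {
	head, rest, found := strings.Cut(line, "] ")
	if !found {
		return
	}
	fields := strings.Fields(head)
	if len(fields) != 4 || len(fields[0]) < 5 {
		return
	}
	return fields[0][:1], fields[0][1:], fields[1], fields[2], fields[3], rest, true
}

func main() {
	sev, date, clock, pid, src, msg, ok := parseKlog(
		`I0302 13:16:35.755143 2581 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"`)
	fmt.Println(ok, sev, date, clock, pid, src, msg)
}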
Mar 2 13:16:35.737689 kubelet[2581]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 2 13:16:35.737689 kubelet[2581]: I0302 13:16:35.736794 2581 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 2 13:16:35.757725 kubelet[2581]: I0302 13:16:35.755143 2581 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 2 13:16:35.757725 kubelet[2581]: I0302 13:16:35.755181 2581 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 2 13:16:35.757725 kubelet[2581]: I0302 13:16:35.755641 2581 server.go:956] "Client rotation is on, will bootstrap in background" Mar 2 13:16:35.757725 kubelet[2581]: I0302 13:16:35.757478 2581 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 2 13:16:35.764668 kubelet[2581]: I0302 13:16:35.764356 2581 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 2 13:16:35.775870 kubelet[2581]: E0302 13:16:35.775746 2581 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 2 13:16:35.775870 kubelet[2581]: I0302 13:16:35.775832 2581 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 2 13:16:35.784870 kubelet[2581]: I0302 13:16:35.784781 2581 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 2 13:16:35.785490 kubelet[2581]: I0302 13:16:35.785229 2581 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 2 13:16:35.785490 kubelet[2581]: I0302 13:16:35.785276 2581 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 2 13:16:35.785904 kubelet[2581]: I0302 13:16:35.785504 2581 topology_manager.go:138] "Creating topology manager with none policy" Mar 2 13:16:35.785904 kubelet[2581]: I0302 13:16:35.785624 2581 container_manager_linux.go:303] "Creating device plugin manager" Mar 2 13:16:35.785904 kubelet[2581]: I0302 13:16:35.785711 2581 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:16:35.786021 kubelet[2581]: I0302 13:16:35.785996 2581 kubelet.go:480] "Attempting to sync node with API server" Mar 2 13:16:35.786060 kubelet[2581]: I0302 13:16:35.786025 2581 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 2 13:16:35.786096 kubelet[2581]: I0302 13:16:35.786069 2581 kubelet.go:386] "Adding apiserver pod source" Mar 2 13:16:35.786125 kubelet[2581]: I0302 13:16:35.786095 2581 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 2 13:16:35.788934 kubelet[2581]: I0302 13:16:35.788391 2581 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 2 13:16:35.789800 kubelet[2581]: I0302 13:16:35.789448 2581 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 2 13:16:35.815185 kubelet[2581]: I0302 13:16:35.814875 2581 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 2 13:16:35.815185 kubelet[2581]: I0302 13:16:35.814936 2581 server.go:1289] "Started kubelet" Mar 2 13:16:35.816991 kubelet[2581]: I0302 13:16:35.816911 2581 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 2 13:16:35.820650 kubelet[2581]: I0302 13:16:35.819035 2581 
server.go:317] "Adding debug handlers to kubelet server" Mar 2 13:16:35.820650 kubelet[2581]: I0302 13:16:35.816109 2581 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 2 13:16:35.820650 kubelet[2581]: I0302 13:16:35.819834 2581 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 2 13:16:35.835421 kubelet[2581]: I0302 13:16:35.835279 2581 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 2 13:16:35.846687 kubelet[2581]: I0302 13:16:35.841233 2581 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 2 13:16:35.846687 kubelet[2581]: I0302 13:16:35.841899 2581 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 2 13:16:35.846687 kubelet[2581]: I0302 13:16:35.842078 2581 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 2 13:16:35.846687 kubelet[2581]: I0302 13:16:35.842282 2581 reconciler.go:26] "Reconciler: start to sync state" Mar 2 13:16:35.851773 kubelet[2581]: I0302 13:16:35.849921 2581 factory.go:223] Registration of the systemd container factory successfully Mar 2 13:16:35.851773 kubelet[2581]: I0302 13:16:35.850975 2581 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 2 13:16:35.871466 kubelet[2581]: I0302 13:16:35.869670 2581 factory.go:223] Registration of the containerd container factory successfully Mar 2 13:16:35.879204 kubelet[2581]: E0302 13:16:35.873869 2581 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 2 13:16:35.909078 sudo[2609]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Mar 2 13:16:35.909904 sudo[2609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Mar 2 13:16:35.909988 kubelet[2581]: I0302 13:16:35.909394 2581 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 2 13:16:35.917228 kubelet[2581]: I0302 13:16:35.917068 2581 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 2 13:16:35.917228 kubelet[2581]: I0302 13:16:35.917143 2581 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 2 13:16:35.917228 kubelet[2581]: I0302 13:16:35.917170 2581 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
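Context for the config dump above: it shows "CgroupDriver":"systemd" and "CgroupVersion":2, and the earlier "RuntimeConfig ... unknown method" error means this containerd is too old to report a cgroup driver over CRI, so the kubelet falls back to its own config file. A common way to confirm the host is on the unified cgroup v2 hierarchy is to check for its root control file (a well-known convention, not kubelet source):

package main

import (
	"fmt"
	"os"
)

// onCgroupV2 reports whether the unified cgroup v2 hierarchy is mounted:
// on v2, /sys/fs/cgroup/cgroup.controllers exists at the root.
func onCgroupV2() bool {
	_, err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
	return err == nil
}

func main() {
	fmt.Println("cgroup v2:", onCgroupV2()) // matches "CgroupVersion":2 above
}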
Mar 2 13:16:35.917228 kubelet[2581]: I0302 13:16:35.917180 2581 kubelet.go:2436] "Starting kubelet main sync loop" Mar 2 13:16:35.917489 kubelet[2581]: E0302 13:16:35.917240 2581 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 2 13:16:35.981221 kubelet[2581]: I0302 13:16:35.981015 2581 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 2 13:16:35.981221 kubelet[2581]: I0302 13:16:35.981082 2581 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 2 13:16:35.981221 kubelet[2581]: I0302 13:16:35.981114 2581 state_mem.go:36] "Initialized new in-memory state store" Mar 2 13:16:35.981499 kubelet[2581]: I0302 13:16:35.981424 2581 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 2 13:16:35.981499 kubelet[2581]: I0302 13:16:35.981441 2581 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 2 13:16:35.981499 kubelet[2581]: I0302 13:16:35.981468 2581 policy_none.go:49] "None policy: Start" Mar 2 13:16:35.981499 kubelet[2581]: I0302 13:16:35.981487 2581 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 2 13:16:35.981747 kubelet[2581]: I0302 13:16:35.981504 2581 state_mem.go:35] "Initializing new in-memory state store" Mar 2 13:16:35.981789 kubelet[2581]: I0302 13:16:35.981767 2581 state_mem.go:75] "Updated machine memory state" Mar 2 13:16:35.999701 kubelet[2581]: E0302 13:16:35.999536 2581 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 2 13:16:35.999996 kubelet[2581]: I0302 13:16:35.999941 2581 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 2 13:16:36.000072 kubelet[2581]: I0302 13:16:35.999993 2581 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 2 13:16:36.000358 kubelet[2581]: I0302 13:16:36.000301 2581 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 2 13:16:36.004944 kubelet[2581]: E0302 13:16:36.004762 2581 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 2 13:16:36.019299 kubelet[2581]: I0302 13:16:36.019095 2581 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Mar 2 13:16:36.020088 kubelet[2581]: I0302 13:16:36.019958 2581 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Mar 2 13:16:36.021716 kubelet[2581]: I0302 13:16:36.021538 2581 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:36.044800 kubelet[2581]: I0302 13:16:36.044730 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e944e4cb17af904786c3a2e01e298498-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"e944e4cb17af904786c3a2e01e298498\") " pod="kube-system/kube-scheduler-localhost" Mar 2 13:16:36.044800 kubelet[2581]: I0302 13:16:36.044787 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/16858461cde391899503911b654298a4-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"16858461cde391899503911b654298a4\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:16:36.044800 kubelet[2581]: I0302 13:16:36.044819 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/16858461cde391899503911b654298a4-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"16858461cde391899503911b654298a4\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:16:36.044800 kubelet[2581]: I0302 13:16:36.044847 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/16858461cde391899503911b654298a4-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"16858461cde391899503911b654298a4\") " pod="kube-system/kube-apiserver-localhost" Mar 2 13:16:36.044800 kubelet[2581]: I0302 13:16:36.044876 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:36.045517 kubelet[2581]: I0302 13:16:36.044909 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:36.045517 kubelet[2581]: I0302 13:16:36.044936 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:36.045517 kubelet[2581]: I0302 13:16:36.045006 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:36.045517 kubelet[2581]: I0302 13:16:36.045032 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8747e1f8a49a618fbc1324a8fe2d3754-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8747e1f8a49a618fbc1324a8fe2d3754\") " pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:36.059282 kubelet[2581]: E0302 13:16:36.058843 2581 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Mar 2 13:16:36.059282 kubelet[2581]: E0302 13:16:36.059177 2581 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 2 13:16:36.113020 kubelet[2581]: I0302 13:16:36.112859 2581 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Mar 2 13:16:36.153292 kubelet[2581]: I0302 13:16:36.152974 2581 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Mar 2 13:16:36.153292 kubelet[2581]: I0302 13:16:36.153057 2581 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Mar 2 13:16:36.362501 kubelet[2581]: E0302 13:16:36.362374 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:36.362767 kubelet[2581]: E0302 13:16:36.362402 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:36.364178 kubelet[2581]: E0302 13:16:36.364104 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:36.737508 sudo[2609]: pam_unix(sudo:session): session closed for user root Mar 2 13:16:36.789639 kubelet[2581]: I0302 13:16:36.789055 2581 apiserver.go:52] "Watching apiserver" Mar 2 13:16:36.843472 kubelet[2581]: I0302 13:16:36.843247 2581 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 2 13:16:36.955229 kubelet[2581]: E0302 13:16:36.955103 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:36.957041 kubelet[2581]: E0302 13:16:36.956037 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:36.957041 kubelet[2581]: E0302 13:16:36.956806 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:37.048432 kubelet[2581]: I0302 13:16:37.048293 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.048271818 podStartE2EDuration="6.048271818s" podCreationTimestamp="2026-03-02 13:16:31 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:16:37.04822423 +0000 UTC m=+1.408502079" watchObservedRunningTime="2026-03-02 13:16:37.048271818 +0000 UTC m=+1.408549677" Mar 2 13:16:37.093755 kubelet[2581]: I0302 13:16:37.093523 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.093497167 podStartE2EDuration="1.093497167s" podCreationTimestamp="2026-03-02 13:16:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:16:37.090721403 +0000 UTC m=+1.450999251" watchObservedRunningTime="2026-03-02 13:16:37.093497167 +0000 UTC m=+1.453775026" Mar 2 13:16:37.130272 kubelet[2581]: I0302 13:16:37.130056 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.128395482 podStartE2EDuration="6.128395482s" podCreationTimestamp="2026-03-02 13:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:16:37.110376021 +0000 UTC m=+1.470653930" watchObservedRunningTime="2026-03-02 13:16:37.128395482 +0000 UTC m=+1.488673331" Mar 2 13:16:37.958743 kubelet[2581]: E0302 13:16:37.958496 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:37.960106 kubelet[2581]: E0302 13:16:37.960073 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:38.488122 sudo[1653]: pam_unix(sudo:session): session closed for user root Mar 2 13:16:38.491132 sshd[1648]: pam_unix(sshd:session): session closed for user core Mar 2 13:16:38.496896 systemd[1]: sshd@8-10.0.0.96:22-10.0.0.1:36740.service: Deactivated successfully. Mar 2 13:16:38.501024 systemd[1]: session-9.scope: Deactivated successfully. Mar 2 13:16:38.501335 systemd[1]: session-9.scope: Consumed 11.492s CPU time, 164.9M memory peak, 0B memory swap peak. Mar 2 13:16:38.502435 systemd-logind[1441]: Session 9 logged out. Waiting for processes to exit. Mar 2 13:16:38.504732 systemd-logind[1441]: Removed session 9. Mar 2 13:16:39.661655 kubelet[2581]: I0302 13:16:39.661244 2581 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 2 13:16:39.663856 kubelet[2581]: I0302 13:16:39.662867 2581 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 2 13:16:39.663917 containerd[1459]: time="2026-03-02T13:16:39.662410588Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 2 13:16:40.302174 kubelet[2581]: E0302 13:16:40.302021 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:40.606789 systemd[1]: Created slice kubepods-besteffort-pod43041111_dc76_480d_af45_2bde3c61d80d.slice - libcontainer container kubepods-besteffort-pod43041111_dc76_480d_af45_2bde3c61d80d.slice. 
Mar 2 13:16:40.630862 systemd[1]: Created slice kubepods-burstable-poda7bee1b3_a132_4cc0_a032_d16919f8e65b.slice - libcontainer container kubepods-burstable-poda7bee1b3_a132_4cc0_a032_d16919f8e65b.slice. Mar 2 13:16:40.651643 kubelet[2581]: E0302 13:16:40.651132 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:40.701358 kubelet[2581]: I0302 13:16:40.699501 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-run\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.701358 kubelet[2581]: I0302 13:16:40.699645 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-config-path\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.701358 kubelet[2581]: I0302 13:16:40.699683 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7bee1b3-a132-4cc0-a032-d16919f8e65b-hubble-tls\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.701358 kubelet[2581]: I0302 13:16:40.699706 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnqnd\" (UniqueName: \"kubernetes.io/projected/a7bee1b3-a132-4cc0-a032-d16919f8e65b-kube-api-access-pnqnd\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.701358 kubelet[2581]: I0302 13:16:40.699733 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43041111-dc76-480d-af45-2bde3c61d80d-lib-modules\") pod \"kube-proxy-vrhwl\" (UID: \"43041111-dc76-480d-af45-2bde3c61d80d\") " pod="kube-system/kube-proxy-vrhwl" Mar 2 13:16:40.701358 kubelet[2581]: I0302 13:16:40.699754 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-cgroup\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.702277 kubelet[2581]: I0302 13:16:40.699775 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-lib-modules\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.702277 kubelet[2581]: I0302 13:16:40.699801 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7bee1b3-a132-4cc0-a032-d16919f8e65b-clustermesh-secrets\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.702277 kubelet[2581]: I0302 13:16:40.699829 2581 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/43041111-dc76-480d-af45-2bde3c61d80d-kube-proxy\") pod \"kube-proxy-vrhwl\" (UID: \"43041111-dc76-480d-af45-2bde3c61d80d\") " pod="kube-system/kube-proxy-vrhwl" Mar 2 13:16:40.702277 kubelet[2581]: I0302 13:16:40.699852 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43041111-dc76-480d-af45-2bde3c61d80d-xtables-lock\") pod \"kube-proxy-vrhwl\" (UID: \"43041111-dc76-480d-af45-2bde3c61d80d\") " pod="kube-system/kube-proxy-vrhwl" Mar 2 13:16:40.702277 kubelet[2581]: I0302 13:16:40.699875 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-bpf-maps\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.702277 kubelet[2581]: I0302 13:16:40.699901 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cni-path\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.702415 kubelet[2581]: I0302 13:16:40.699925 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-etc-cni-netd\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.702415 kubelet[2581]: I0302 13:16:40.699955 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-hostproc\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.702415 kubelet[2581]: I0302 13:16:40.699980 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-xtables-lock\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.702415 kubelet[2581]: I0302 13:16:40.700013 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-host-proc-sys-net\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.702415 kubelet[2581]: I0302 13:16:40.700036 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-host-proc-sys-kernel\") pod \"cilium-qjknz\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " pod="kube-system/cilium-qjknz" Mar 2 13:16:40.702662 kubelet[2581]: I0302 13:16:40.700065 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pbx9\" (UniqueName: \"kubernetes.io/projected/43041111-dc76-480d-af45-2bde3c61d80d-kube-api-access-5pbx9\") pod 
\"kube-proxy-vrhwl\" (UID: \"43041111-dc76-480d-af45-2bde3c61d80d\") " pod="kube-system/kube-proxy-vrhwl" Mar 2 13:16:40.884733 systemd[1]: Created slice kubepods-besteffort-pod6b9e107c_328c_4cd0_ab60_66ddbd87ee7f.slice - libcontainer container kubepods-besteffort-pod6b9e107c_328c_4cd0_ab60_66ddbd87ee7f.slice. Mar 2 13:16:40.907038 kubelet[2581]: I0302 13:16:40.906830 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b9e107c-328c-4cd0-ab60-66ddbd87ee7f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-ff4tl\" (UID: \"6b9e107c-328c-4cd0-ab60-66ddbd87ee7f\") " pod="kube-system/cilium-operator-6c4d7847fc-ff4tl" Mar 2 13:16:40.907038 kubelet[2581]: I0302 13:16:40.906963 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49lq9\" (UniqueName: \"kubernetes.io/projected/6b9e107c-328c-4cd0-ab60-66ddbd87ee7f-kube-api-access-49lq9\") pod \"cilium-operator-6c4d7847fc-ff4tl\" (UID: \"6b9e107c-328c-4cd0-ab60-66ddbd87ee7f\") " pod="kube-system/cilium-operator-6c4d7847fc-ff4tl" Mar 2 13:16:40.922089 kubelet[2581]: E0302 13:16:40.921995 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:40.927478 containerd[1459]: time="2026-03-02T13:16:40.927332460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vrhwl,Uid:43041111-dc76-480d-af45-2bde3c61d80d,Namespace:kube-system,Attempt:0,}" Mar 2 13:16:40.941312 kubelet[2581]: E0302 13:16:40.940733 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:40.943857 containerd[1459]: time="2026-03-02T13:16:40.943695093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qjknz,Uid:a7bee1b3-a132-4cc0-a032-d16919f8e65b,Namespace:kube-system,Attempt:0,}" Mar 2 13:16:40.972053 kubelet[2581]: E0302 13:16:40.972019 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:40.977191 kubelet[2581]: E0302 13:16:40.973793 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:41.074205 containerd[1459]: time="2026-03-02T13:16:41.073147590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:16:41.074205 containerd[1459]: time="2026-03-02T13:16:41.073737061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:16:41.074205 containerd[1459]: time="2026-03-02T13:16:41.073790159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:41.079209 containerd[1459]: time="2026-03-02T13:16:41.078036564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:16:41.079209 containerd[1459]: time="2026-03-02T13:16:41.078143873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:16:41.079209 containerd[1459]: time="2026-03-02T13:16:41.078165557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:41.079459 containerd[1459]: time="2026-03-02T13:16:41.079181680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:41.079459 containerd[1459]: time="2026-03-02T13:16:41.079117810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:41.118290 systemd[1]: Started cri-containerd-9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea.scope - libcontainer container 9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea. Mar 2 13:16:41.131315 systemd[1]: Started cri-containerd-e5b1c95dbc03199f27765541108bede2ab794080c81daab7abaf943b56006d58.scope - libcontainer container e5b1c95dbc03199f27765541108bede2ab794080c81daab7abaf943b56006d58. Mar 2 13:16:41.179777 containerd[1459]: time="2026-03-02T13:16:41.178705314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qjknz,Uid:a7bee1b3-a132-4cc0-a032-d16919f8e65b,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\"" Mar 2 13:16:41.191644 kubelet[2581]: E0302 13:16:41.187006 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:41.192172 kubelet[2581]: E0302 13:16:41.192066 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:41.192989 containerd[1459]: time="2026-03-02T13:16:41.192871554Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Mar 2 13:16:41.195519 containerd[1459]: time="2026-03-02T13:16:41.195424809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ff4tl,Uid:6b9e107c-328c-4cd0-ab60-66ddbd87ee7f,Namespace:kube-system,Attempt:0,}" Mar 2 13:16:41.206855 containerd[1459]: time="2026-03-02T13:16:41.203941276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vrhwl,Uid:43041111-dc76-480d-af45-2bde3c61d80d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e5b1c95dbc03199f27765541108bede2ab794080c81daab7abaf943b56006d58\"" Mar 2 13:16:41.207042 kubelet[2581]: E0302 13:16:41.205347 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:41.233165 containerd[1459]: time="2026-03-02T13:16:41.233046534Z" level=info msg="CreateContainer within sandbox \"e5b1c95dbc03199f27765541108bede2ab794080c81daab7abaf943b56006d58\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 2 13:16:41.299401 containerd[1459]: time="2026-03-02T13:16:41.299081450Z" level=info msg="CreateContainer within sandbox \"e5b1c95dbc03199f27765541108bede2ab794080c81daab7abaf943b56006d58\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8dc15d43c4d07d42b21e7d949c82788b450ad28909754f05cdf0c0fa1d7b456\"" Mar 2 13:16:41.302127 containerd[1459]: 
time="2026-03-02T13:16:41.301521112Z" level=info msg="StartContainer for \"f8dc15d43c4d07d42b21e7d949c82788b450ad28909754f05cdf0c0fa1d7b456\"" Mar 2 13:16:41.336801 containerd[1459]: time="2026-03-02T13:16:41.334027806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:16:41.338844 containerd[1459]: time="2026-03-02T13:16:41.338707345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:16:41.338844 containerd[1459]: time="2026-03-02T13:16:41.338770724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:41.339141 containerd[1459]: time="2026-03-02T13:16:41.338880979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:16:41.377306 systemd[1]: Started cri-containerd-f8dc15d43c4d07d42b21e7d949c82788b450ad28909754f05cdf0c0fa1d7b456.scope - libcontainer container f8dc15d43c4d07d42b21e7d949c82788b450ad28909754f05cdf0c0fa1d7b456. Mar 2 13:16:41.386141 systemd[1]: Started cri-containerd-84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a.scope - libcontainer container 84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a. Mar 2 13:16:41.463709 containerd[1459]: time="2026-03-02T13:16:41.463190323Z" level=info msg="StartContainer for \"f8dc15d43c4d07d42b21e7d949c82788b450ad28909754f05cdf0c0fa1d7b456\" returns successfully" Mar 2 13:16:41.476722 containerd[1459]: time="2026-03-02T13:16:41.475747839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-ff4tl,Uid:6b9e107c-328c-4cd0-ab60-66ddbd87ee7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\"" Mar 2 13:16:41.478735 kubelet[2581]: E0302 13:16:41.478502 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:41.986007 kubelet[2581]: E0302 13:16:41.985877 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:41.986007 kubelet[2581]: E0302 13:16:41.985966 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:42.011296 kubelet[2581]: I0302 13:16:42.011169 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vrhwl" podStartSLOduration=2.011149194 podStartE2EDuration="2.011149194s" podCreationTimestamp="2026-03-02 13:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:16:42.008629467 +0000 UTC m=+6.368907326" watchObservedRunningTime="2026-03-02 13:16:42.011149194 +0000 UTC m=+6.371427043" Mar 2 13:16:45.266010 kubelet[2581]: E0302 13:16:45.265688 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:46.000489 kubelet[2581]: E0302 13:16:46.000439 2581 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:16:54.537512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204748518.mount: Deactivated successfully. Mar 2 13:17:02.239419 containerd[1459]: time="2026-03-02T13:17:02.239148897Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:02.244520 containerd[1459]: time="2026-03-02T13:17:02.244423026Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Mar 2 13:17:02.251258 containerd[1459]: time="2026-03-02T13:17:02.250428787Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:02.257667 containerd[1459]: time="2026-03-02T13:17:02.256155958Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 21.063185242s" Mar 2 13:17:02.257667 containerd[1459]: time="2026-03-02T13:17:02.256614192Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Mar 2 13:17:02.270938 containerd[1459]: time="2026-03-02T13:17:02.269356783Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Mar 2 13:17:02.333178 containerd[1459]: time="2026-03-02T13:17:02.333089415Z" level=info msg="CreateContainer within sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Mar 2 13:17:02.400958 containerd[1459]: time="2026-03-02T13:17:02.400875619Z" level=info msg="CreateContainer within sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\"" Mar 2 13:17:02.406899 containerd[1459]: time="2026-03-02T13:17:02.406780496Z" level=info msg="StartContainer for \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\"" Mar 2 13:17:02.457018 systemd[1]: run-containerd-runc-k8s.io-7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28-runc.ZQyEeP.mount: Deactivated successfully. Mar 2 13:17:02.473124 systemd[1]: Started cri-containerd-7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28.scope - libcontainer container 7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28. Mar 2 13:17:02.570223 containerd[1459]: time="2026-03-02T13:17:02.569810011Z" level=info msg="StartContainer for \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\" returns successfully" Mar 2 13:17:02.572022 systemd[1]: cri-containerd-7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28.scope: Deactivated successfully. 
Mar 2 13:17:02.789225 containerd[1459]: time="2026-03-02T13:17:02.788949823Z" level=info msg="shim disconnected" id=7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28 namespace=k8s.io Mar 2 13:17:02.789225 containerd[1459]: time="2026-03-02T13:17:02.789197919Z" level=warning msg="cleaning up after shim disconnected" id=7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28 namespace=k8s.io Mar 2 13:17:02.789225 containerd[1459]: time="2026-03-02T13:17:02.789211867Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:17:03.128861 kubelet[2581]: E0302 13:17:03.128805 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:03.156022 containerd[1459]: time="2026-03-02T13:17:03.155933390Z" level=info msg="CreateContainer within sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Mar 2 13:17:03.285311 containerd[1459]: time="2026-03-02T13:17:03.285110401Z" level=info msg="CreateContainer within sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\"" Mar 2 13:17:03.291048 containerd[1459]: time="2026-03-02T13:17:03.290834148Z" level=info msg="StartContainer for \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\"" Mar 2 13:17:03.375144 systemd[1]: Started cri-containerd-77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf.scope - libcontainer container 77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf. Mar 2 13:17:03.385782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28-rootfs.mount: Deactivated successfully. Mar 2 13:17:03.435163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount930881998.mount: Deactivated successfully. Mar 2 13:17:03.468479 containerd[1459]: time="2026-03-02T13:17:03.468307638Z" level=info msg="StartContainer for \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\" returns successfully" Mar 2 13:17:03.517141 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 2 13:17:03.520116 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:17:03.520287 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:17:03.532392 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 2 13:17:03.537530 systemd[1]: cri-containerd-77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf.scope: Deactivated successfully. Mar 2 13:17:03.612356 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 2 13:17:03.633332 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf-rootfs.mount: Deactivated successfully. 
Mar 2 13:17:03.664432 containerd[1459]: time="2026-03-02T13:17:03.661480538Z" level=info msg="shim disconnected" id=77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf namespace=k8s.io Mar 2 13:17:03.664432 containerd[1459]: time="2026-03-02T13:17:03.661635732Z" level=warning msg="cleaning up after shim disconnected" id=77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf namespace=k8s.io Mar 2 13:17:03.664432 containerd[1459]: time="2026-03-02T13:17:03.661655898Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:17:04.166484 kubelet[2581]: E0302 13:17:04.164400 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:04.195431 containerd[1459]: time="2026-03-02T13:17:04.191907037Z" level=info msg="CreateContainer within sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Mar 2 13:17:04.373136 containerd[1459]: time="2026-03-02T13:17:04.367516496Z" level=info msg="CreateContainer within sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\"" Mar 2 13:17:04.378985 containerd[1459]: time="2026-03-02T13:17:04.376061010Z" level=info msg="StartContainer for \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\"" Mar 2 13:17:04.380445 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4133903149.mount: Deactivated successfully. Mar 2 13:17:04.487136 systemd[1]: Started cri-containerd-cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7.scope - libcontainer container cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7. Mar 2 13:17:04.687005 systemd[1]: cri-containerd-cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7.scope: Deactivated successfully. Mar 2 13:17:04.689079 containerd[1459]: time="2026-03-02T13:17:04.688872227Z" level=info msg="StartContainer for \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\" returns successfully" Mar 2 13:17:04.744909 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7-rootfs.mount: Deactivated successfully. 
Mar 2 13:17:04.777332 containerd[1459]: time="2026-03-02T13:17:04.777178256Z" level=info msg="shim disconnected" id=cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7 namespace=k8s.io Mar 2 13:17:04.777332 containerd[1459]: time="2026-03-02T13:17:04.777252905Z" level=warning msg="cleaning up after shim disconnected" id=cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7 namespace=k8s.io Mar 2 13:17:04.777332 containerd[1459]: time="2026-03-02T13:17:04.777265947Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:17:05.181748 kubelet[2581]: E0302 13:17:05.181397 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:05.215156 containerd[1459]: time="2026-03-02T13:17:05.213724586Z" level=info msg="CreateContainer within sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Mar 2 13:17:05.283279 containerd[1459]: time="2026-03-02T13:17:05.283113008Z" level=info msg="CreateContainer within sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\"" Mar 2 13:17:05.287634 containerd[1459]: time="2026-03-02T13:17:05.286157857Z" level=info msg="StartContainer for \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\"" Mar 2 13:17:05.363759 systemd[1]: Started cri-containerd-5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061.scope - libcontainer container 5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061. Mar 2 13:17:05.477071 systemd[1]: cri-containerd-5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061.scope: Deactivated successfully. Mar 2 13:17:05.497354 containerd[1459]: time="2026-03-02T13:17:05.496924241Z" level=info msg="StartContainer for \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\" returns successfully" Mar 2 13:17:05.571663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061-rootfs.mount: Deactivated successfully. 
Mar 2 13:17:05.650717 containerd[1459]: time="2026-03-02T13:17:05.648909458Z" level=info msg="shim disconnected" id=5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061 namespace=k8s.io Mar 2 13:17:05.650717 containerd[1459]: time="2026-03-02T13:17:05.648973909Z" level=warning msg="cleaning up after shim disconnected" id=5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061 namespace=k8s.io Mar 2 13:17:05.650717 containerd[1459]: time="2026-03-02T13:17:05.648985598Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:17:06.128672 containerd[1459]: time="2026-03-02T13:17:06.125390062Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:06.135213 containerd[1459]: time="2026-03-02T13:17:06.134429231Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Mar 2 13:17:06.148657 containerd[1459]: time="2026-03-02T13:17:06.148298189Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 2 13:17:06.150052 containerd[1459]: time="2026-03-02T13:17:06.149967204Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 3.880566677s" Mar 2 13:17:06.150052 containerd[1459]: time="2026-03-02T13:17:06.150018012Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Mar 2 13:17:06.184773 containerd[1459]: time="2026-03-02T13:17:06.182716556Z" level=info msg="CreateContainer within sandbox \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Mar 2 13:17:06.217532 kubelet[2581]: E0302 13:17:06.215992 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:06.236129 containerd[1459]: time="2026-03-02T13:17:06.235863683Z" level=info msg="CreateContainer within sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Mar 2 13:17:06.289683 containerd[1459]: time="2026-03-02T13:17:06.287106911Z" level=info msg="CreateContainer within sandbox \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\"" Mar 2 13:17:06.290634 containerd[1459]: time="2026-03-02T13:17:06.290293482Z" level=info msg="StartContainer for \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\"" Mar 2 13:17:06.331290 containerd[1459]: time="2026-03-02T13:17:06.330476415Z" level=info msg="CreateContainer within 
sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\"" Mar 2 13:17:06.339068 containerd[1459]: time="2026-03-02T13:17:06.338667036Z" level=info msg="StartContainer for \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\"" Mar 2 13:17:06.396288 systemd[1]: Started cri-containerd-b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f.scope - libcontainer container b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f. Mar 2 13:17:06.449926 systemd[1]: Started cri-containerd-956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3.scope - libcontainer container 956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3. Mar 2 13:17:06.524083 containerd[1459]: time="2026-03-02T13:17:06.523975691Z" level=info msg="StartContainer for \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\" returns successfully" Mar 2 13:17:06.546749 containerd[1459]: time="2026-03-02T13:17:06.546655060Z" level=info msg="StartContainer for \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\" returns successfully" Mar 2 13:17:06.871430 kubelet[2581]: I0302 13:17:06.871286 2581 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 2 13:17:07.020721 systemd[1]: Created slice kubepods-burstable-podc698ef30_ec09_4f0d_8ed2_4afd7a733b20.slice - libcontainer container kubepods-burstable-podc698ef30_ec09_4f0d_8ed2_4afd7a733b20.slice. Mar 2 13:17:07.039034 systemd[1]: Created slice kubepods-burstable-pod05c4a12e_b13f_425a_bd8a_9f6f0bfeb382.slice - libcontainer container kubepods-burstable-pod05c4a12e_b13f_425a_bd8a_9f6f0bfeb382.slice. 
Mar 2 13:17:07.066307 kubelet[2581]: I0302 13:17:07.065981 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bccv9\" (UniqueName: \"kubernetes.io/projected/05c4a12e-b13f-425a-bd8a-9f6f0bfeb382-kube-api-access-bccv9\") pod \"coredns-674b8bbfcf-n9kc4\" (UID: \"05c4a12e-b13f-425a-bd8a-9f6f0bfeb382\") " pod="kube-system/coredns-674b8bbfcf-n9kc4" Mar 2 13:17:07.066307 kubelet[2581]: I0302 13:17:07.066050 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05c4a12e-b13f-425a-bd8a-9f6f0bfeb382-config-volume\") pod \"coredns-674b8bbfcf-n9kc4\" (UID: \"05c4a12e-b13f-425a-bd8a-9f6f0bfeb382\") " pod="kube-system/coredns-674b8bbfcf-n9kc4" Mar 2 13:17:07.066307 kubelet[2581]: I0302 13:17:07.066075 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c698ef30-ec09-4f0d-8ed2-4afd7a733b20-config-volume\") pod \"coredns-674b8bbfcf-s7qvj\" (UID: \"c698ef30-ec09-4f0d-8ed2-4afd7a733b20\") " pod="kube-system/coredns-674b8bbfcf-s7qvj" Mar 2 13:17:07.066307 kubelet[2581]: I0302 13:17:07.066097 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9vsl\" (UniqueName: \"kubernetes.io/projected/c698ef30-ec09-4f0d-8ed2-4afd7a733b20-kube-api-access-s9vsl\") pod \"coredns-674b8bbfcf-s7qvj\" (UID: \"c698ef30-ec09-4f0d-8ed2-4afd7a733b20\") " pod="kube-system/coredns-674b8bbfcf-s7qvj" Mar 2 13:17:07.241779 kubelet[2581]: E0302 13:17:07.239166 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:07.255062 kubelet[2581]: E0302 13:17:07.254951 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:07.330119 kubelet[2581]: E0302 13:17:07.330078 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:07.330800 kubelet[2581]: I0302 13:17:07.330486 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qjknz" podStartSLOduration=6.255857962 podStartE2EDuration="27.33047321s" podCreationTimestamp="2026-03-02 13:16:40 +0000 UTC" firstStartedPulling="2026-03-02 13:16:41.19234923 +0000 UTC m=+5.552627078" lastFinishedPulling="2026-03-02 13:17:02.266964476 +0000 UTC m=+26.627242326" observedRunningTime="2026-03-02 13:17:07.329077478 +0000 UTC m=+31.689355358" watchObservedRunningTime="2026-03-02 13:17:07.33047321 +0000 UTC m=+31.690751089" Mar 2 13:17:07.342126 containerd[1459]: time="2026-03-02T13:17:07.340515766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s7qvj,Uid:c698ef30-ec09-4f0d-8ed2-4afd7a733b20,Namespace:kube-system,Attempt:0,}" Mar 2 13:17:07.348991 kubelet[2581]: E0302 13:17:07.348819 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:07.350084 containerd[1459]: time="2026-03-02T13:17:07.349860943Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-n9kc4,Uid:05c4a12e-b13f-425a-bd8a-9f6f0bfeb382,Namespace:kube-system,Attempt:0,}" Mar 2 13:17:07.398880 kubelet[2581]: I0302 13:17:07.398651 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-ff4tl" podStartSLOduration=2.720146199 podStartE2EDuration="27.397135358s" podCreationTimestamp="2026-03-02 13:16:40 +0000 UTC" firstStartedPulling="2026-03-02 13:16:41.47990029 +0000 UTC m=+5.840178140" lastFinishedPulling="2026-03-02 13:17:06.15688945 +0000 UTC m=+30.517167299" observedRunningTime="2026-03-02 13:17:07.393907782 +0000 UTC m=+31.754185631" watchObservedRunningTime="2026-03-02 13:17:07.397135358 +0000 UTC m=+31.757413207" Mar 2 13:17:08.261484 kubelet[2581]: E0302 13:17:08.259765 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:08.263529 kubelet[2581]: E0302 13:17:08.263475 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:09.268679 kubelet[2581]: E0302 13:17:09.265105 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:11.204702 systemd-networkd[1385]: cilium_host: Link UP Mar 2 13:17:11.206409 systemd-networkd[1385]: cilium_net: Link UP Mar 2 13:17:11.206416 systemd-networkd[1385]: cilium_net: Gained carrier Mar 2 13:17:11.206930 systemd-networkd[1385]: cilium_host: Gained carrier Mar 2 13:17:11.577991 systemd-networkd[1385]: cilium_vxlan: Link UP Mar 2 13:17:11.578004 systemd-networkd[1385]: cilium_vxlan: Gained carrier Mar 2 13:17:11.970868 kernel: NET: Registered PF_ALG protocol family Mar 2 13:17:12.091498 systemd-networkd[1385]: cilium_net: Gained IPv6LL Mar 2 13:17:12.223009 systemd-networkd[1385]: cilium_host: Gained IPv6LL Mar 2 13:17:13.563293 systemd-networkd[1385]: cilium_vxlan: Gained IPv6LL Mar 2 13:17:13.612328 systemd-networkd[1385]: lxc_health: Link UP Mar 2 13:17:13.623840 systemd-networkd[1385]: lxc_health: Gained carrier Mar 2 13:17:14.197948 systemd-networkd[1385]: lxc153e5c7b1357: Link UP Mar 2 13:17:14.255115 kernel: eth0: renamed from tmpea4b9 Mar 2 13:17:14.256668 systemd-networkd[1385]: lxc509da55777ca: Link UP Mar 2 13:17:14.281268 kernel: eth0: renamed from tmpd8da6 Mar 2 13:17:14.314265 systemd-networkd[1385]: lxc153e5c7b1357: Gained carrier Mar 2 13:17:14.317164 systemd-networkd[1385]: lxc509da55777ca: Gained carrier Mar 2 13:17:14.946258 kubelet[2581]: E0302 13:17:14.946158 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:15.295861 systemd-networkd[1385]: lxc_health: Gained IPv6LL Mar 2 13:17:15.322284 kubelet[2581]: E0302 13:17:15.321898 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:15.418900 systemd-networkd[1385]: lxc153e5c7b1357: Gained IPv6LL Mar 2 13:17:16.324156 systemd-networkd[1385]: lxc509da55777ca: Gained IPv6LL Mar 2 13:17:16.333454 kubelet[2581]: E0302 13:17:16.332819 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:24.907774 systemd[1]: Started sshd@9-10.0.0.96:22-10.0.0.1:45264.service - OpenSSH per-connection server daemon (10.0.0.1:45264). Mar 2 13:17:25.528472 sshd[3813]: Accepted publickey for core from 10.0.0.1 port 45264 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:17:25.551021 sshd[3813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:25.612501 systemd-logind[1441]: New session 10 of user core. Mar 2 13:17:25.645924 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 2 13:17:27.749979 sshd[3813]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:27.770403 systemd[1]: sshd@9-10.0.0.96:22-10.0.0.1:45264.service: Deactivated successfully. Mar 2 13:17:27.783868 systemd[1]: session-10.scope: Deactivated successfully. Mar 2 13:17:27.784909 systemd[1]: session-10.scope: Consumed 1.443s CPU time. Mar 2 13:17:27.796251 systemd-logind[1441]: Session 10 logged out. Waiting for processes to exit. Mar 2 13:17:27.802492 systemd-logind[1441]: Removed session 10. Mar 2 13:17:29.891740 containerd[1459]: time="2026-03-02T13:17:29.891285770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:17:29.891740 containerd[1459]: time="2026-03-02T13:17:29.891412533Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:17:29.892356 containerd[1459]: time="2026-03-02T13:17:29.891758346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:17:29.894612 containerd[1459]: time="2026-03-02T13:17:29.893294457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:17:29.894612 containerd[1459]: time="2026-03-02T13:17:29.894157912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 2 13:17:29.894612 containerd[1459]: time="2026-03-02T13:17:29.894284526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 2 13:17:29.894612 containerd[1459]: time="2026-03-02T13:17:29.894304673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:17:29.895168 containerd[1459]: time="2026-03-02T13:17:29.894468765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 2 13:17:29.973766 systemd[1]: Started cri-containerd-ea4b9475548628231ddb4f0a0f1eae46cd9ab36e5e7b208791a447d66dec7a2d.scope - libcontainer container ea4b9475548628231ddb4f0a0f1eae46cd9ab36e5e7b208791a447d66dec7a2d. Mar 2 13:17:29.981775 systemd[1]: Started cri-containerd-d8da691155b7e10382017f858af07906274a18e7ce0dd48928bf1a242d4c8407.scope - libcontainer container d8da691155b7e10382017f858af07906274a18e7ce0dd48928bf1a242d4c8407. 
Mar 2 13:17:30.021051 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:17:30.026280 systemd-resolved[1389]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 2 13:17:30.098911 containerd[1459]: time="2026-03-02T13:17:30.098823365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n9kc4,Uid:05c4a12e-b13f-425a-bd8a-9f6f0bfeb382,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea4b9475548628231ddb4f0a0f1eae46cd9ab36e5e7b208791a447d66dec7a2d\"" Mar 2 13:17:30.106924 containerd[1459]: time="2026-03-02T13:17:30.105363172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s7qvj,Uid:c698ef30-ec09-4f0d-8ed2-4afd7a733b20,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8da691155b7e10382017f858af07906274a18e7ce0dd48928bf1a242d4c8407\"" Mar 2 13:17:30.107176 kubelet[2581]: E0302 13:17:30.106354 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:30.109120 kubelet[2581]: E0302 13:17:30.108933 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:30.123406 containerd[1459]: time="2026-03-02T13:17:30.123273480Z" level=info msg="CreateContainer within sandbox \"ea4b9475548628231ddb4f0a0f1eae46cd9ab36e5e7b208791a447d66dec7a2d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:17:30.132886 containerd[1459]: time="2026-03-02T13:17:30.132418117Z" level=info msg="CreateContainer within sandbox \"d8da691155b7e10382017f858af07906274a18e7ce0dd48928bf1a242d4c8407\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 2 13:17:30.171784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3296699063.mount: Deactivated successfully. Mar 2 13:17:30.180725 containerd[1459]: time="2026-03-02T13:17:30.180439842Z" level=info msg="CreateContainer within sandbox \"d8da691155b7e10382017f858af07906274a18e7ce0dd48928bf1a242d4c8407\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1ba80e6b65d3682f7cae4604d42efd9f7df058c6022a9f7f1f7f6bcabdd4f6f2\"" Mar 2 13:17:30.183371 containerd[1459]: time="2026-03-02T13:17:30.183271170Z" level=info msg="StartContainer for \"1ba80e6b65d3682f7cae4604d42efd9f7df058c6022a9f7f1f7f6bcabdd4f6f2\"" Mar 2 13:17:30.191228 containerd[1459]: time="2026-03-02T13:17:30.191009191Z" level=info msg="CreateContainer within sandbox \"ea4b9475548628231ddb4f0a0f1eae46cd9ab36e5e7b208791a447d66dec7a2d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ba6e2bd6ab38c7cf1f8f6b03c10a9b400e499738ede926b72a2e5968c2048fda\"" Mar 2 13:17:30.193038 containerd[1459]: time="2026-03-02T13:17:30.192891344Z" level=info msg="StartContainer for \"ba6e2bd6ab38c7cf1f8f6b03c10a9b400e499738ede926b72a2e5968c2048fda\"" Mar 2 13:17:30.253179 systemd[1]: Started cri-containerd-1ba80e6b65d3682f7cae4604d42efd9f7df058c6022a9f7f1f7f6bcabdd4f6f2.scope - libcontainer container 1ba80e6b65d3682f7cae4604d42efd9f7df058c6022a9f7f1f7f6bcabdd4f6f2. Mar 2 13:17:30.284241 systemd[1]: Started cri-containerd-ba6e2bd6ab38c7cf1f8f6b03c10a9b400e499738ede926b72a2e5968c2048fda.scope - libcontainer container ba6e2bd6ab38c7cf1f8f6b03c10a9b400e499738ede926b72a2e5968c2048fda. 
Mar 2 13:17:30.361147 containerd[1459]: time="2026-03-02T13:17:30.359655143Z" level=info msg="StartContainer for \"1ba80e6b65d3682f7cae4604d42efd9f7df058c6022a9f7f1f7f6bcabdd4f6f2\" returns successfully" Mar 2 13:17:30.376319 containerd[1459]: time="2026-03-02T13:17:30.376202674Z" level=info msg="StartContainer for \"ba6e2bd6ab38c7cf1f8f6b03c10a9b400e499738ede926b72a2e5968c2048fda\" returns successfully" Mar 2 13:17:30.591827 kubelet[2581]: E0302 13:17:30.590422 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:30.606816 kubelet[2581]: E0302 13:17:30.606649 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:30.689531 kubelet[2581]: I0302 13:17:30.689304 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n9kc4" podStartSLOduration=50.689285047 podStartE2EDuration="50.689285047s" podCreationTimestamp="2026-03-02 13:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:17:30.648749524 +0000 UTC m=+55.009027384" watchObservedRunningTime="2026-03-02 13:17:30.689285047 +0000 UTC m=+55.049562926" Mar 2 13:17:30.692632 kubelet[2581]: I0302 13:17:30.691877 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s7qvj" podStartSLOduration=50.691861525 podStartE2EDuration="50.691861525s" podCreationTimestamp="2026-03-02 13:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:17:30.689430177 +0000 UTC m=+55.049708036" watchObservedRunningTime="2026-03-02 13:17:30.691861525 +0000 UTC m=+55.052139374" Mar 2 13:17:30.905080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765381470.mount: Deactivated successfully. Mar 2 13:17:31.646983 kubelet[2581]: E0302 13:17:31.646439 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:31.655413 kubelet[2581]: E0302 13:17:31.655204 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:32.652750 kubelet[2581]: E0302 13:17:32.652415 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:32.652750 kubelet[2581]: E0302 13:17:32.652423 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:32.763924 systemd[1]: Started sshd@10-10.0.0.96:22-10.0.0.1:52774.service - OpenSSH per-connection server daemon (10.0.0.1:52774). Mar 2 13:17:32.876774 sshd[4002]: Accepted publickey for core from 10.0.0.1 port 52774 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:17:32.880507 sshd[4002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:32.893251 systemd-logind[1441]: New session 11 of user core. 
Mar 2 13:17:32.897912 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 2 13:17:33.379513 sshd[4002]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:33.388453 systemd[1]: sshd@10-10.0.0.96:22-10.0.0.1:52774.service: Deactivated successfully. Mar 2 13:17:33.392488 systemd[1]: session-11.scope: Deactivated successfully. Mar 2 13:17:33.395056 systemd-logind[1441]: Session 11 logged out. Waiting for processes to exit. Mar 2 13:17:33.398067 systemd-logind[1441]: Removed session 11. Mar 2 13:17:38.425381 systemd[1]: Started sshd@11-10.0.0.96:22-10.0.0.1:52778.service - OpenSSH per-connection server daemon (10.0.0.1:52778). Mar 2 13:17:38.483183 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 52778 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:17:38.485395 sshd[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:38.504233 systemd-logind[1441]: New session 12 of user core. Mar 2 13:17:38.515995 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 2 13:17:38.749897 sshd[4028]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:38.756509 systemd[1]: sshd@11-10.0.0.96:22-10.0.0.1:52778.service: Deactivated successfully. Mar 2 13:17:38.759513 systemd[1]: session-12.scope: Deactivated successfully. Mar 2 13:17:38.763070 systemd-logind[1441]: Session 12 logged out. Waiting for processes to exit. Mar 2 13:17:38.766382 systemd-logind[1441]: Removed session 12. Mar 2 13:17:43.807797 systemd[1]: Started sshd@12-10.0.0.96:22-10.0.0.1:41158.service - OpenSSH per-connection server daemon (10.0.0.1:41158). Mar 2 13:17:43.872085 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 41158 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:17:43.875308 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:43.894018 systemd-logind[1441]: New session 13 of user core. Mar 2 13:17:43.896989 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 2 13:17:44.243816 sshd[4045]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:44.248705 systemd[1]: sshd@12-10.0.0.96:22-10.0.0.1:41158.service: Deactivated successfully. Mar 2 13:17:44.261857 systemd[1]: session-13.scope: Deactivated successfully. Mar 2 13:17:44.267681 systemd-logind[1441]: Session 13 logged out. Waiting for processes to exit. Mar 2 13:17:44.271642 systemd-logind[1441]: Removed session 13. Mar 2 13:17:45.922297 kubelet[2581]: E0302 13:17:45.922195 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:49.270537 systemd[1]: Started sshd@13-10.0.0.96:22-10.0.0.1:55778.service - OpenSSH per-connection server daemon (10.0.0.1:55778). Mar 2 13:17:49.445463 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 55778 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:17:49.449005 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:49.464190 systemd-logind[1441]: New session 14 of user core. Mar 2 13:17:49.472254 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 2 13:17:49.675374 sshd[4061]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:49.684256 systemd[1]: sshd@13-10.0.0.96:22-10.0.0.1:55778.service: Deactivated successfully. 
Mar 2 13:17:49.689094 systemd[1]: session-14.scope: Deactivated successfully. Mar 2 13:17:49.691801 systemd-logind[1441]: Session 14 logged out. Waiting for processes to exit. Mar 2 13:17:49.697723 systemd-logind[1441]: Removed session 14. Mar 2 13:17:49.921906 kubelet[2581]: E0302 13:17:49.920681 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:50.922796 kubelet[2581]: E0302 13:17:50.921976 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:17:54.714380 systemd[1]: Started sshd@14-10.0.0.96:22-10.0.0.1:55786.service - OpenSSH per-connection server daemon (10.0.0.1:55786). Mar 2 13:17:54.796084 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 55786 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:17:54.797483 sshd[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:54.812441 systemd-logind[1441]: New session 15 of user core. Mar 2 13:17:54.823615 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 2 13:17:55.064673 sshd[4076]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:55.084674 systemd[1]: sshd@14-10.0.0.96:22-10.0.0.1:55786.service: Deactivated successfully. Mar 2 13:17:55.087729 systemd[1]: session-15.scope: Deactivated successfully. Mar 2 13:17:55.092403 systemd-logind[1441]: Session 15 logged out. Waiting for processes to exit. Mar 2 13:17:55.109454 systemd[1]: Started sshd@15-10.0.0.96:22-10.0.0.1:55794.service - OpenSSH per-connection server daemon (10.0.0.1:55794). Mar 2 13:17:55.114346 systemd-logind[1441]: Removed session 15. Mar 2 13:17:55.189202 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 55794 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:17:55.191999 sshd[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:55.215626 systemd-logind[1441]: New session 16 of user core. Mar 2 13:17:55.228192 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 2 13:17:55.670710 sshd[4091]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:55.693519 systemd[1]: sshd@15-10.0.0.96:22-10.0.0.1:55794.service: Deactivated successfully. Mar 2 13:17:55.700309 systemd[1]: session-16.scope: Deactivated successfully. Mar 2 13:17:55.705886 systemd-logind[1441]: Session 16 logged out. Waiting for processes to exit. Mar 2 13:17:55.734830 systemd[1]: Started sshd@16-10.0.0.96:22-10.0.0.1:55800.service - OpenSSH per-connection server daemon (10.0.0.1:55800). Mar 2 13:17:55.740270 systemd-logind[1441]: Removed session 16. Mar 2 13:17:55.853914 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 55800 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:17:55.857496 sshd[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:17:55.870540 systemd-logind[1441]: New session 17 of user core. Mar 2 13:17:55.882328 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 2 13:17:56.104731 sshd[4103]: pam_unix(sshd:session): session closed for user core Mar 2 13:17:56.111475 systemd[1]: sshd@16-10.0.0.96:22-10.0.0.1:55800.service: Deactivated successfully. Mar 2 13:17:56.115629 systemd[1]: session-17.scope: Deactivated successfully. 
Mar 2 13:17:56.119358 systemd-logind[1441]: Session 17 logged out. Waiting for processes to exit. Mar 2 13:17:56.122996 systemd-logind[1441]: Removed session 17. Mar 2 13:18:01.137851 systemd[1]: Started sshd@17-10.0.0.96:22-10.0.0.1:49040.service - OpenSSH per-connection server daemon (10.0.0.1:49040). Mar 2 13:18:01.216063 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 49040 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:01.219460 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:01.242158 systemd-logind[1441]: New session 18 of user core. Mar 2 13:18:01.253908 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 2 13:18:01.464647 sshd[4118]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:01.472045 systemd[1]: sshd@17-10.0.0.96:22-10.0.0.1:49040.service: Deactivated successfully. Mar 2 13:18:01.476248 systemd[1]: session-18.scope: Deactivated successfully. Mar 2 13:18:01.478430 systemd-logind[1441]: Session 18 logged out. Waiting for processes to exit. Mar 2 13:18:01.484028 systemd-logind[1441]: Removed session 18. Mar 2 13:18:06.507183 systemd[1]: Started sshd@18-10.0.0.96:22-10.0.0.1:49054.service - OpenSSH per-connection server daemon (10.0.0.1:49054). Mar 2 13:18:06.576194 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 49054 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:06.580059 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:06.593866 systemd-logind[1441]: New session 19 of user core. Mar 2 13:18:06.610017 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 2 13:18:06.786776 sshd[4133]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:06.794483 systemd[1]: sshd@18-10.0.0.96:22-10.0.0.1:49054.service: Deactivated successfully. Mar 2 13:18:06.799322 systemd[1]: session-19.scope: Deactivated successfully. Mar 2 13:18:06.804039 systemd-logind[1441]: Session 19 logged out. Waiting for processes to exit. Mar 2 13:18:06.808524 systemd-logind[1441]: Removed session 19. Mar 2 13:18:09.922468 kubelet[2581]: E0302 13:18:09.920518 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:18:11.821498 systemd[1]: Started sshd@19-10.0.0.96:22-10.0.0.1:52468.service - OpenSSH per-connection server daemon (10.0.0.1:52468). Mar 2 13:18:11.885226 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 52468 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:11.889230 sshd[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:11.904498 systemd-logind[1441]: New session 20 of user core. Mar 2 13:18:11.916082 systemd[1]: Started session-20.scope - Session 20 of User core. Mar 2 13:18:12.118036 sshd[4147]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:12.128263 systemd[1]: sshd@19-10.0.0.96:22-10.0.0.1:52468.service: Deactivated successfully. Mar 2 13:18:12.145835 systemd[1]: session-20.scope: Deactivated successfully. Mar 2 13:18:12.148028 systemd-logind[1441]: Session 20 logged out. Waiting for processes to exit. Mar 2 13:18:12.150461 systemd-logind[1441]: Removed session 20. Mar 2 13:18:17.149083 systemd[1]: Started sshd@20-10.0.0.96:22-10.0.0.1:52484.service - OpenSSH per-connection server daemon (10.0.0.1:52484). 
Mar 2 13:18:17.221080 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 52484 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:17.229446 sshd[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:17.254688 systemd-logind[1441]: New session 21 of user core. Mar 2 13:18:17.266075 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 2 13:18:17.487858 sshd[4163]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:17.493830 systemd[1]: sshd@20-10.0.0.96:22-10.0.0.1:52484.service: Deactivated successfully. Mar 2 13:18:17.497468 systemd[1]: session-21.scope: Deactivated successfully. Mar 2 13:18:17.508977 systemd-logind[1441]: Session 21 logged out. Waiting for processes to exit. Mar 2 13:18:17.514388 systemd-logind[1441]: Removed session 21. Mar 2 13:18:20.918856 kubelet[2581]: E0302 13:18:20.918744 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:18:22.525242 systemd[1]: Started sshd@21-10.0.0.96:22-10.0.0.1:45694.service - OpenSSH per-connection server daemon (10.0.0.1:45694). Mar 2 13:18:22.605837 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 45694 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:22.608446 sshd[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:22.620670 systemd-logind[1441]: New session 22 of user core. Mar 2 13:18:22.629895 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 2 13:18:22.919459 sshd[4178]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:22.929538 systemd[1]: sshd@21-10.0.0.96:22-10.0.0.1:45694.service: Deactivated successfully. Mar 2 13:18:22.937645 systemd[1]: session-22.scope: Deactivated successfully. Mar 2 13:18:22.939269 systemd-logind[1441]: Session 22 logged out. Waiting for processes to exit. Mar 2 13:18:22.943798 systemd-logind[1441]: Removed session 22. Mar 2 13:18:27.954334 systemd[1]: Started sshd@22-10.0.0.96:22-10.0.0.1:45702.service - OpenSSH per-connection server daemon (10.0.0.1:45702). Mar 2 13:18:28.047094 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 45702 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:28.051492 sshd[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:28.071151 systemd-logind[1441]: New session 23 of user core. Mar 2 13:18:28.085052 systemd[1]: Started session-23.scope - Session 23 of User core. Mar 2 13:18:28.322084 sshd[4192]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:28.329424 systemd[1]: sshd@22-10.0.0.96:22-10.0.0.1:45702.service: Deactivated successfully. Mar 2 13:18:28.341125 systemd[1]: session-23.scope: Deactivated successfully. Mar 2 13:18:28.345058 systemd-logind[1441]: Session 23 logged out. Waiting for processes to exit. Mar 2 13:18:28.347903 systemd-logind[1441]: Removed session 23. Mar 2 13:18:33.356534 systemd[1]: Started sshd@23-10.0.0.96:22-10.0.0.1:47500.service - OpenSSH per-connection server daemon (10.0.0.1:47500). Mar 2 13:18:33.420353 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 47500 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:33.423486 sshd[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:33.442248 systemd-logind[1441]: New session 24 of user core. 
Mar 2 13:18:33.457000 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 2 13:18:33.694636 sshd[4206]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:33.704453 systemd[1]: sshd@23-10.0.0.96:22-10.0.0.1:47500.service: Deactivated successfully. Mar 2 13:18:33.708194 systemd[1]: session-24.scope: Deactivated successfully. Mar 2 13:18:33.710739 systemd-logind[1441]: Session 24 logged out. Waiting for processes to exit. Mar 2 13:18:33.720504 systemd-logind[1441]: Removed session 24. Mar 2 13:18:36.919984 kubelet[2581]: E0302 13:18:36.919328 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:18:38.728372 systemd[1]: Started sshd@24-10.0.0.96:22-10.0.0.1:47510.service - OpenSSH per-connection server daemon (10.0.0.1:47510). Mar 2 13:18:38.774797 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 47510 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:38.779959 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:38.796922 systemd-logind[1441]: New session 25 of user core. Mar 2 13:18:38.809845 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 2 13:18:38.997791 sshd[4222]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:39.007906 systemd[1]: sshd@24-10.0.0.96:22-10.0.0.1:47510.service: Deactivated successfully. Mar 2 13:18:39.011167 systemd[1]: session-25.scope: Deactivated successfully. Mar 2 13:18:39.013513 systemd-logind[1441]: Session 25 logged out. Waiting for processes to exit. Mar 2 13:18:39.017339 systemd-logind[1441]: Removed session 25. Mar 2 13:18:44.028852 systemd[1]: Started sshd@25-10.0.0.96:22-10.0.0.1:52584.service - OpenSSH per-connection server daemon (10.0.0.1:52584). Mar 2 13:18:44.085918 sshd[4238]: Accepted publickey for core from 10.0.0.1 port 52584 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:44.091225 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:44.108305 systemd-logind[1441]: New session 26 of user core. Mar 2 13:18:44.117134 systemd[1]: Started session-26.scope - Session 26 of User core. Mar 2 13:18:44.342292 sshd[4238]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:44.348344 systemd[1]: sshd@25-10.0.0.96:22-10.0.0.1:52584.service: Deactivated successfully. Mar 2 13:18:44.351316 systemd[1]: session-26.scope: Deactivated successfully. Mar 2 13:18:44.352689 systemd-logind[1441]: Session 26 logged out. Waiting for processes to exit. Mar 2 13:18:44.356743 systemd-logind[1441]: Removed session 26. Mar 2 13:18:49.361055 systemd[1]: Started sshd@26-10.0.0.96:22-10.0.0.1:55552.service - OpenSSH per-connection server daemon (10.0.0.1:55552). Mar 2 13:18:49.435190 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 55552 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:49.438778 sshd[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:49.448770 systemd-logind[1441]: New session 27 of user core. Mar 2 13:18:49.461636 systemd[1]: Started session-27.scope - Session 27 of User core. Mar 2 13:18:49.652952 sshd[4253]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:49.658432 systemd[1]: sshd@26-10.0.0.96:22-10.0.0.1:55552.service: Deactivated successfully. 
Mar 2 13:18:49.662429 systemd[1]: session-27.scope: Deactivated successfully. Mar 2 13:18:49.665170 systemd-logind[1441]: Session 27 logged out. Waiting for processes to exit. Mar 2 13:18:49.667802 systemd-logind[1441]: Removed session 27. Mar 2 13:18:49.919684 kubelet[2581]: E0302 13:18:49.919272 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:18:49.920305 kubelet[2581]: E0302 13:18:49.919681 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:18:54.682062 systemd[1]: Started sshd@27-10.0.0.96:22-10.0.0.1:55566.service - OpenSSH per-connection server daemon (10.0.0.1:55566). Mar 2 13:18:54.797978 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 55566 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:54.802295 sshd[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:54.826396 systemd-logind[1441]: New session 28 of user core. Mar 2 13:18:54.849509 systemd[1]: Started session-28.scope - Session 28 of User core. Mar 2 13:18:55.164802 sshd[4268]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:55.192972 systemd[1]: sshd@27-10.0.0.96:22-10.0.0.1:55566.service: Deactivated successfully. Mar 2 13:18:55.196731 systemd[1]: session-28.scope: Deactivated successfully. Mar 2 13:18:55.203379 systemd-logind[1441]: Session 28 logged out. Waiting for processes to exit. Mar 2 13:18:55.212381 systemd[1]: Started sshd@28-10.0.0.96:22-10.0.0.1:55578.service - OpenSSH per-connection server daemon (10.0.0.1:55578). Mar 2 13:18:55.217433 systemd-logind[1441]: Removed session 28. Mar 2 13:18:55.279362 sshd[4283]: Accepted publickey for core from 10.0.0.1 port 55578 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:55.282023 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:55.296230 systemd-logind[1441]: New session 29 of user core. Mar 2 13:18:55.315768 systemd[1]: Started session-29.scope - Session 29 of User core. Mar 2 13:18:55.925209 kubelet[2581]: E0302 13:18:55.923906 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:18:56.274433 sshd[4283]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:56.301931 systemd[1]: sshd@28-10.0.0.96:22-10.0.0.1:55578.service: Deactivated successfully. Mar 2 13:18:56.311207 systemd[1]: session-29.scope: Deactivated successfully. Mar 2 13:18:56.319217 systemd-logind[1441]: Session 29 logged out. Waiting for processes to exit. Mar 2 13:18:56.365393 systemd[1]: Started sshd@29-10.0.0.96:22-10.0.0.1:55594.service - OpenSSH per-connection server daemon (10.0.0.1:55594). Mar 2 13:18:56.367187 systemd-logind[1441]: Removed session 29. Mar 2 13:18:56.445377 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 55594 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:56.450339 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:56.469482 systemd-logind[1441]: New session 30 of user core. Mar 2 13:18:56.482180 systemd[1]: Started session-30.scope - Session 30 of User core. 
Mar 2 13:18:57.890022 sshd[4295]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:57.911840 systemd[1]: sshd@29-10.0.0.96:22-10.0.0.1:55594.service: Deactivated successfully. Mar 2 13:18:57.927485 kubelet[2581]: E0302 13:18:57.924952 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:18:57.942471 systemd[1]: session-30.scope: Deactivated successfully. Mar 2 13:18:57.947701 systemd-logind[1441]: Session 30 logged out. Waiting for processes to exit. Mar 2 13:18:57.961266 systemd[1]: Started sshd@30-10.0.0.96:22-10.0.0.1:55598.service - OpenSSH per-connection server daemon (10.0.0.1:55598). Mar 2 13:18:57.968419 systemd-logind[1441]: Removed session 30. Mar 2 13:18:58.040643 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 55598 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:58.049073 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:58.065962 systemd-logind[1441]: New session 31 of user core. Mar 2 13:18:58.070885 systemd[1]: Started session-31.scope - Session 31 of User core. Mar 2 13:18:58.833184 sshd[4316]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:58.849819 systemd[1]: sshd@30-10.0.0.96:22-10.0.0.1:55598.service: Deactivated successfully. Mar 2 13:18:58.856848 systemd[1]: session-31.scope: Deactivated successfully. Mar 2 13:18:58.861036 systemd-logind[1441]: Session 31 logged out. Waiting for processes to exit. Mar 2 13:18:58.876685 systemd[1]: Started sshd@31-10.0.0.96:22-10.0.0.1:36942.service - OpenSSH per-connection server daemon (10.0.0.1:36942). Mar 2 13:18:58.881660 systemd-logind[1441]: Removed session 31. Mar 2 13:18:58.946511 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 36942 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:18:58.949714 sshd[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:18:58.966171 systemd-logind[1441]: New session 32 of user core. Mar 2 13:18:58.976084 systemd[1]: Started session-32.scope - Session 32 of User core. Mar 2 13:18:59.310308 sshd[4329]: pam_unix(sshd:session): session closed for user core Mar 2 13:18:59.325426 systemd[1]: sshd@31-10.0.0.96:22-10.0.0.1:36942.service: Deactivated successfully. Mar 2 13:18:59.333449 systemd[1]: session-32.scope: Deactivated successfully. Mar 2 13:18:59.343541 systemd-logind[1441]: Session 32 logged out. Waiting for processes to exit. Mar 2 13:18:59.347313 systemd-logind[1441]: Removed session 32. Mar 2 13:19:00.920086 kubelet[2581]: E0302 13:19:00.920047 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 2 13:19:04.380703 systemd[1]: Started sshd@32-10.0.0.96:22-10.0.0.1:36946.service - OpenSSH per-connection server daemon (10.0.0.1:36946). Mar 2 13:19:04.463838 sshd[4344]: Accepted publickey for core from 10.0.0.1 port 36946 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:19:04.468367 sshd[4344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:19:04.481415 systemd-logind[1441]: New session 33 of user core. Mar 2 13:19:04.490904 systemd[1]: Started session-33.scope - Session 33 of User core. 
Mar 2 13:19:04.799491 sshd[4344]: pam_unix(sshd:session): session closed for user core Mar 2 13:19:04.809232 systemd[1]: sshd@32-10.0.0.96:22-10.0.0.1:36946.service: Deactivated successfully. Mar 2 13:19:04.812483 systemd[1]: session-33.scope: Deactivated successfully. Mar 2 13:19:04.815200 systemd-logind[1441]: Session 33 logged out. Waiting for processes to exit. Mar 2 13:19:04.818261 systemd-logind[1441]: Removed session 33. Mar 2 13:19:09.817539 systemd[1]: Started sshd@33-10.0.0.96:22-10.0.0.1:36214.service - OpenSSH per-connection server daemon (10.0.0.1:36214). Mar 2 13:19:09.933225 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 36214 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:19:09.940039 sshd[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:19:09.959934 systemd-logind[1441]: New session 34 of user core. Mar 2 13:19:09.971046 systemd[1]: Started session-34.scope - Session 34 of User core. Mar 2 13:19:10.186980 sshd[4360]: pam_unix(sshd:session): session closed for user core Mar 2 13:19:10.194687 systemd[1]: sshd@33-10.0.0.96:22-10.0.0.1:36214.service: Deactivated successfully. Mar 2 13:19:10.197475 systemd[1]: session-34.scope: Deactivated successfully. Mar 2 13:19:10.199517 systemd-logind[1441]: Session 34 logged out. Waiting for processes to exit. Mar 2 13:19:10.203746 systemd-logind[1441]: Removed session 34. Mar 2 13:19:15.212486 systemd[1]: Started sshd@34-10.0.0.96:22-10.0.0.1:36218.service - OpenSSH per-connection server daemon (10.0.0.1:36218). Mar 2 13:19:15.299730 sshd[4380]: Accepted publickey for core from 10.0.0.1 port 36218 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:19:15.303626 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:19:15.321219 systemd-logind[1441]: New session 35 of user core. Mar 2 13:19:15.331231 systemd[1]: Started session-35.scope - Session 35 of User core. Mar 2 13:19:15.556842 sshd[4380]: pam_unix(sshd:session): session closed for user core Mar 2 13:19:15.565428 systemd[1]: sshd@34-10.0.0.96:22-10.0.0.1:36218.service: Deactivated successfully. Mar 2 13:19:15.569119 systemd[1]: session-35.scope: Deactivated successfully. Mar 2 13:19:15.571934 systemd-logind[1441]: Session 35 logged out. Waiting for processes to exit. Mar 2 13:19:15.576850 systemd-logind[1441]: Removed session 35. Mar 2 13:19:20.617507 systemd[1]: Started sshd@35-10.0.0.96:22-10.0.0.1:54268.service - OpenSSH per-connection server daemon (10.0.0.1:54268). Mar 2 13:19:20.696257 sshd[4394]: Accepted publickey for core from 10.0.0.1 port 54268 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:19:20.699830 sshd[4394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:19:20.709075 systemd-logind[1441]: New session 36 of user core. Mar 2 13:19:20.720307 systemd[1]: Started session-36.scope - Session 36 of User core. Mar 2 13:19:20.918357 sshd[4394]: pam_unix(sshd:session): session closed for user core Mar 2 13:19:20.954748 systemd[1]: sshd@35-10.0.0.96:22-10.0.0.1:54268.service: Deactivated successfully. Mar 2 13:19:20.957491 systemd[1]: session-36.scope: Deactivated successfully. Mar 2 13:19:20.960258 systemd-logind[1441]: Session 36 logged out. Waiting for processes to exit. Mar 2 13:19:20.968969 systemd[1]: Started sshd@36-10.0.0.96:22-10.0.0.1:54270.service - OpenSSH per-connection server daemon (10.0.0.1:54270). Mar 2 13:19:20.972413 systemd-logind[1441]: Removed session 36. 
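Sessions 11 through 36 above all follow the same lifecycle: systemd socket-activates a per-connection sshd@N-<address>.service, sshd accepts the client's public key, pam_unix opens the session, logind wraps it in a session-N.scope, and the chain is deactivated in reverse when the client disconnects. When auditing a stretch of journal like this, the "Accepted publickey" lines carry the useful fields; a small sketch that extracts them (the regular expression is an assumption fitted to the lines above, not a format OpenSSH guarantees):

package main

import (
	"fmt"
	"regexp"
)

// acceptRe is fitted to the sshd lines in this journal:
// "Accepted publickey for USER from ADDR port PORT ssh2: TYPE FINGERPRINT".
var acceptRe = regexp.MustCompile(
	`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: (\S+) (\S+)`)

func main() {
	line := `sshd[4360]: Accepted publickey for core from 10.0.0.1 port 36214 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM`
	if m := acceptRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("user=%s addr=%s port=%s type=%s fingerprint=%s\n",
			m[1], m[2], m[3], m[4], m[5])
	}
}

Because every login here presents the same fingerprint from 10.0.0.1, the block reads as a single automated client reconnecting on a timer rather than interactive use.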
Mar 2 13:19:21.029362 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 54270 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM Mar 2 13:19:21.031411 sshd[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 2 13:19:21.056729 systemd-logind[1441]: New session 37 of user core. Mar 2 13:19:21.062882 systemd[1]: Started session-37.scope - Session 37 of User core. Mar 2 13:19:23.064510 containerd[1459]: time="2026-03-02T13:19:23.064352993Z" level=info msg="StopContainer for \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\" with timeout 30 (s)" Mar 2 13:19:23.065271 containerd[1459]: time="2026-03-02T13:19:23.064888619Z" level=info msg="Stop container \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\" with signal terminated" Mar 2 13:19:23.112856 systemd[1]: cri-containerd-b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f.scope: Deactivated successfully. Mar 2 13:19:23.114091 systemd[1]: cri-containerd-b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f.scope: Consumed 1.332s CPU time. Mar 2 13:19:23.179843 containerd[1459]: time="2026-03-02T13:19:23.179764923Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 2 13:19:23.211803 containerd[1459]: time="2026-03-02T13:19:23.211458720Z" level=info msg="StopContainer for \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\" with timeout 2 (s)" Mar 2 13:19:23.213160 containerd[1459]: time="2026-03-02T13:19:23.212410039Z" level=info msg="Stop container \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\" with signal terminated" Mar 2 13:19:23.247246 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f-rootfs.mount: Deactivated successfully. Mar 2 13:19:23.256039 systemd-networkd[1385]: lxc_health: Link DOWN Mar 2 13:19:23.256050 systemd-networkd[1385]: lxc_health: Lost carrier Mar 2 13:19:23.279083 containerd[1459]: time="2026-03-02T13:19:23.279018017Z" level=info msg="shim disconnected" id=b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f namespace=k8s.io Mar 2 13:19:23.279083 containerd[1459]: time="2026-03-02T13:19:23.279078093Z" level=warning msg="cleaning up after shim disconnected" id=b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f namespace=k8s.io Mar 2 13:19:23.279083 containerd[1459]: time="2026-03-02T13:19:23.279091998Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 2 13:19:23.312746 systemd[1]: cri-containerd-956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3.scope: Deactivated successfully. Mar 2 13:19:23.313790 systemd[1]: cri-containerd-956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3.scope: Consumed 20.016s CPU time. 
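The two StopContainer requests above show the CRI's two-phase stop: containerd delivers SIGTERM ("with signal terminated") and escalates to SIGKILL only if the container outlives its grace period (30 s for the first container, 2 s for the second), after which the systemd scope lines report the cgroup's accumulated CPU time. The same shape, reduced to a single process for illustration (stopGracefully is an invented name, not a containerd API):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopGracefully sends SIGTERM, waits up to the grace period, and
// falls back to SIGKILL, the same flow behind containerd's
// "StopContainer ... with timeout N" / "signal terminated" entries.
func stopGracefully(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case <-done:
		return nil // exited within the grace period
	case <-time.After(grace):
		return cmd.Process.Kill() // grace period elapsed: SIGKILL
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(stopGracefully(cmd, 2*time.Second))
}

The "failed to reload cni configuration" error in the same burst is expected during this teardown: removing /etc/cni/net.d/05-cilium.conf leaves no network config behind, so the watcher's reload necessarily fails until a CNI plugin reinstalls one.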
Mar 2 13:19:23.344150 containerd[1459]: time="2026-03-02T13:19:23.343202327Z" level=info msg="StopContainer for \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\" returns successfully"
Mar 2 13:19:23.349890 containerd[1459]: time="2026-03-02T13:19:23.348653236Z" level=info msg="StopPodSandbox for \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\""
Mar 2 13:19:23.349890 containerd[1459]: time="2026-03-02T13:19:23.348703325Z" level=info msg="Container to stop \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:19:23.354089 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a-shm.mount: Deactivated successfully.
Mar 2 13:19:23.365065 systemd[1]: cri-containerd-84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a.scope: Deactivated successfully.
Mar 2 13:19:23.386217 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3-rootfs.mount: Deactivated successfully.
Mar 2 13:19:23.425345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a-rootfs.mount: Deactivated successfully.
Mar 2 13:19:23.426313 containerd[1459]: time="2026-03-02T13:19:23.425538643Z" level=info msg="shim disconnected" id=84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a namespace=k8s.io
Mar 2 13:19:23.426313 containerd[1459]: time="2026-03-02T13:19:23.425659137Z" level=warning msg="cleaning up after shim disconnected" id=84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a namespace=k8s.io
Mar 2 13:19:23.426313 containerd[1459]: time="2026-03-02T13:19:23.425673894Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:19:23.429303 containerd[1459]: time="2026-03-02T13:19:23.429255378Z" level=info msg="shim disconnected" id=956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3 namespace=k8s.io
Mar 2 13:19:23.429789 containerd[1459]: time="2026-03-02T13:19:23.429414500Z" level=warning msg="cleaning up after shim disconnected" id=956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3 namespace=k8s.io
Mar 2 13:19:23.429789 containerd[1459]: time="2026-03-02T13:19:23.429436560Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:19:23.461298 containerd[1459]: time="2026-03-02T13:19:23.461244953Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:19:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:19:23.468465 containerd[1459]: time="2026-03-02T13:19:23.466263655Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:19:23Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:19:23.476310 containerd[1459]: time="2026-03-02T13:19:23.476051768Z" level=info msg="StopContainer for \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\" returns successfully"
Mar 2 13:19:23.477658 containerd[1459]: time="2026-03-02T13:19:23.477152109Z" level=info msg="StopPodSandbox for \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\""
Mar 2 13:19:23.477658 containerd[1459]: time="2026-03-02T13:19:23.477200366Z" level=info msg="Container to stop \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:19:23.477658 containerd[1459]: time="2026-03-02T13:19:23.477222724Z" level=info msg="Container to stop \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:19:23.477658 containerd[1459]: time="2026-03-02T13:19:23.477240316Z" level=info msg="Container to stop \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:19:23.477658 containerd[1459]: time="2026-03-02T13:19:23.477255183Z" level=info msg="Container to stop \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:19:23.477658 containerd[1459]: time="2026-03-02T13:19:23.477272774Z" level=info msg="Container to stop \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Mar 2 13:19:23.509027 containerd[1459]: time="2026-03-02T13:19:23.508017979Z" level=info msg="TearDown network for sandbox \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\" successfully"
Mar 2 13:19:23.509027 containerd[1459]: time="2026-03-02T13:19:23.508094935Z" level=info msg="StopPodSandbox for \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\" returns successfully"
Mar 2 13:19:23.513316 systemd[1]: cri-containerd-9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea.scope: Deactivated successfully.
Mar 2 13:19:23.624183 kubelet[2581]: I0302 13:19:23.622614 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49lq9\" (UniqueName: \"kubernetes.io/projected/6b9e107c-328c-4cd0-ab60-66ddbd87ee7f-kube-api-access-49lq9\") pod \"6b9e107c-328c-4cd0-ab60-66ddbd87ee7f\" (UID: \"6b9e107c-328c-4cd0-ab60-66ddbd87ee7f\") "
Mar 2 13:19:23.624183 kubelet[2581]: I0302 13:19:23.622700 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b9e107c-328c-4cd0-ab60-66ddbd87ee7f-cilium-config-path\") pod \"6b9e107c-328c-4cd0-ab60-66ddbd87ee7f\" (UID: \"6b9e107c-328c-4cd0-ab60-66ddbd87ee7f\") "
Mar 2 13:19:23.647402 kubelet[2581]: I0302 13:19:23.646834 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b9e107c-328c-4cd0-ab60-66ddbd87ee7f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6b9e107c-328c-4cd0-ab60-66ddbd87ee7f" (UID: "6b9e107c-328c-4cd0-ab60-66ddbd87ee7f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Mar 2 13:19:23.652280 kubelet[2581]: I0302 13:19:23.652104 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9e107c-328c-4cd0-ab60-66ddbd87ee7f-kube-api-access-49lq9" (OuterVolumeSpecName: "kube-api-access-49lq9") pod "6b9e107c-328c-4cd0-ab60-66ddbd87ee7f" (UID: "6b9e107c-328c-4cd0-ab60-66ddbd87ee7f"). InnerVolumeSpecName "kube-api-access-49lq9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Mar 2 13:19:23.663913 containerd[1459]: time="2026-03-02T13:19:23.662797319Z" level=info msg="shim disconnected" id=9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea namespace=k8s.io
Mar 2 13:19:23.663913 containerd[1459]: time="2026-03-02T13:19:23.662920387Z" level=warning msg="cleaning up after shim disconnected" id=9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea namespace=k8s.io
Mar 2 13:19:23.663913 containerd[1459]: time="2026-03-02T13:19:23.662941134Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:19:23.689428 containerd[1459]: time="2026-03-02T13:19:23.689288610Z" level=info msg="TearDown network for sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" successfully"
Mar 2 13:19:23.689428 containerd[1459]: time="2026-03-02T13:19:23.689349698Z" level=info msg="StopPodSandbox for \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" returns successfully"
Mar 2 13:19:23.726424 kubelet[2581]: I0302 13:19:23.726303 2581 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-49lq9\" (UniqueName: \"kubernetes.io/projected/6b9e107c-328c-4cd0-ab60-66ddbd87ee7f-kube-api-access-49lq9\") on node \"localhost\" DevicePath \"\""
Mar 2 13:19:23.726424 kubelet[2581]: I0302 13:19:23.726382 2581 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6b9e107c-328c-4cd0-ab60-66ddbd87ee7f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Mar 2 13:19:23.827369 kubelet[2581]: I0302 13:19:23.827282 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-hostproc\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") "
Mar 2 13:19:23.827369 kubelet[2581]: I0302 13:19:23.827360 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-host-proc-sys-kernel\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") "
Mar 2 13:19:23.827622 kubelet[2581]: I0302 13:19:23.827386 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-lib-modules\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") "
Mar 2 13:19:23.827622 kubelet[2581]: I0302 13:19:23.827405 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-etc-cni-netd\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") "
Mar 2 13:19:23.827622 kubelet[2581]: I0302 13:19:23.827442 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-config-path\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") "
Mar 2 13:19:23.827622 kubelet[2581]: I0302 13:19:23.827466 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnqnd\" (UniqueName: \"kubernetes.io/projected/a7bee1b3-a132-4cc0-a032-d16919f8e65b-kube-api-access-pnqnd\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") "
\"kubernetes.io/projected/a7bee1b3-a132-4cc0-a032-d16919f8e65b-kube-api-access-pnqnd\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " Mar 2 13:19:23.827622 kubelet[2581]: I0302 13:19:23.827485 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-bpf-maps\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " Mar 2 13:19:23.827622 kubelet[2581]: I0302 13:19:23.827504 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-xtables-lock\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " Mar 2 13:19:23.827769 kubelet[2581]: I0302 13:19:23.827524 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-cgroup\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " Mar 2 13:19:23.831698 kubelet[2581]: I0302 13:19:23.829642 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:19:23.831698 kubelet[2581]: I0302 13:19:23.829713 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:19:23.831698 kubelet[2581]: I0302 13:19:23.829738 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:19:23.831698 kubelet[2581]: I0302 13:19:23.829763 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-hostproc" (OuterVolumeSpecName: "hostproc") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:19:23.831698 kubelet[2581]: I0302 13:19:23.829782 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:19:23.832068 kubelet[2581]: I0302 13:19:23.829802 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:19:23.832068 kubelet[2581]: I0302 13:19:23.829822 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:19:23.832068 kubelet[2581]: I0302 13:19:23.830399 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-host-proc-sys-net\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " Mar 2 13:19:23.832068 kubelet[2581]: I0302 13:19:23.830522 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-run\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " Mar 2 13:19:23.832068 kubelet[2581]: I0302 13:19:23.830654 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7bee1b3-a132-4cc0-a032-d16919f8e65b-hubble-tls\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " Mar 2 13:19:23.832068 kubelet[2581]: I0302 13:19:23.830700 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7bee1b3-a132-4cc0-a032-d16919f8e65b-clustermesh-secrets\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " Mar 2 13:19:23.832270 kubelet[2581]: I0302 13:19:23.830721 2581 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cni-path\") pod \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\" (UID: \"a7bee1b3-a132-4cc0-a032-d16919f8e65b\") " Mar 2 13:19:23.832270 kubelet[2581]: I0302 13:19:23.830720 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:19:23.832270 kubelet[2581]: I0302 13:19:23.830757 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:19:23.832270 kubelet[2581]: I0302 13:19:23.830802 2581 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.832270 kubelet[2581]: I0302 13:19:23.830819 2581 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.832270 kubelet[2581]: I0302 13:19:23.830832 2581 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.832270 kubelet[2581]: I0302 13:19:23.830846 2581 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.832740 kubelet[2581]: I0302 13:19:23.830909 2581 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-run\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.832740 kubelet[2581]: I0302 13:19:23.830922 2581 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-hostproc\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.832740 kubelet[2581]: I0302 13:19:23.830935 2581 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.832740 kubelet[2581]: I0302 13:19:23.830953 2581 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-lib-modules\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.832740 kubelet[2581]: I0302 13:19:23.831742 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cni-path" (OuterVolumeSpecName: "cni-path") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Mar 2 13:19:23.847190 kubelet[2581]: I0302 13:19:23.847102 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 2 13:19:23.858009 kubelet[2581]: I0302 13:19:23.857366 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7bee1b3-a132-4cc0-a032-d16919f8e65b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 2 13:19:23.859341 kubelet[2581]: I0302 13:19:23.858825 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7bee1b3-a132-4cc0-a032-d16919f8e65b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:19:23.869773 kubelet[2581]: I0302 13:19:23.869517 2581 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7bee1b3-a132-4cc0-a032-d16919f8e65b-kube-api-access-pnqnd" (OuterVolumeSpecName: "kube-api-access-pnqnd") pod "a7bee1b3-a132-4cc0-a032-d16919f8e65b" (UID: "a7bee1b3-a132-4cc0-a032-d16919f8e65b"). InnerVolumeSpecName "kube-api-access-pnqnd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 2 13:19:23.940351 kubelet[2581]: I0302 13:19:23.940150 2581 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cni-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.940351 kubelet[2581]: I0302 13:19:23.940234 2581 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a7bee1b3-a132-4cc0-a032-d16919f8e65b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.940351 kubelet[2581]: I0302 13:19:23.940250 2581 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pnqnd\" (UniqueName: \"kubernetes.io/projected/a7bee1b3-a132-4cc0-a032-d16919f8e65b-kube-api-access-pnqnd\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.940351 kubelet[2581]: I0302 13:19:23.940263 2581 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a7bee1b3-a132-4cc0-a032-d16919f8e65b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.940351 kubelet[2581]: I0302 13:19:23.940275 2581 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a7bee1b3-a132-4cc0-a032-d16919f8e65b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.940351 kubelet[2581]: I0302 13:19:23.940290 2581 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a7bee1b3-a132-4cc0-a032-d16919f8e65b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Mar 2 13:19:23.951658 systemd[1]: Removed slice kubepods-besteffort-pod6b9e107c_328c_4cd0_ab60_66ddbd87ee7f.slice - libcontainer container kubepods-besteffort-pod6b9e107c_328c_4cd0_ab60_66ddbd87ee7f.slice. Mar 2 13:19:23.951946 systemd[1]: kubepods-besteffort-pod6b9e107c_328c_4cd0_ab60_66ddbd87ee7f.slice: Consumed 1.377s CPU time. Mar 2 13:19:23.954467 systemd[1]: Removed slice kubepods-burstable-poda7bee1b3_a132_4cc0_a032_d16919f8e65b.slice - libcontainer container kubepods-burstable-poda7bee1b3_a132_4cc0_a032_d16919f8e65b.slice. Mar 2 13:19:23.955010 systemd[1]: kubepods-burstable-poda7bee1b3_a132_4cc0_a032_d16919f8e65b.slice: Consumed 20.243s CPU time. Mar 2 13:19:24.128469 systemd[1]: var-lib-kubelet-pods-6b9e107c\x2d328c\x2d4cd0\x2dab60\x2d66ddbd87ee7f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d49lq9.mount: Deactivated successfully. 
Mar 2 13:19:24.128746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea-rootfs.mount: Deactivated successfully.
Mar 2 13:19:24.128912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea-shm.mount: Deactivated successfully.
Mar 2 13:19:24.129034 systemd[1]: var-lib-kubelet-pods-a7bee1b3\x2da132\x2d4cc0\x2da032\x2dd16919f8e65b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpnqnd.mount: Deactivated successfully.
Mar 2 13:19:24.129146 systemd[1]: var-lib-kubelet-pods-a7bee1b3\x2da132\x2d4cc0\x2da032\x2dd16919f8e65b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Mar 2 13:19:24.129260 systemd[1]: var-lib-kubelet-pods-a7bee1b3\x2da132\x2d4cc0\x2da032\x2dd16919f8e65b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Mar 2 13:19:24.273624 kubelet[2581]: I0302 13:19:24.257632 2581 scope.go:117] "RemoveContainer" containerID="b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f"
Mar 2 13:19:24.273774 containerd[1459]: time="2026-03-02T13:19:24.260997914Z" level=info msg="RemoveContainer for \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\""
Mar 2 13:19:24.292197 containerd[1459]: time="2026-03-02T13:19:24.291462286Z" level=info msg="RemoveContainer for \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\" returns successfully"
Mar 2 13:19:24.292389 kubelet[2581]: I0302 13:19:24.291993 2581 scope.go:117] "RemoveContainer" containerID="b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f"
Mar 2 13:19:24.301161 containerd[1459]: time="2026-03-02T13:19:24.298972507Z" level=error msg="ContainerStatus for \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\": not found"
Mar 2 13:19:24.320890 kubelet[2581]: E0302 13:19:24.319437 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\": not found" containerID="b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f"
Mar 2 13:19:24.320890 kubelet[2581]: I0302 13:19:24.319530 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f"} err="failed to get container status \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b9be2f58d800caebffc677c1317d744ab55b224b9f04754d277a0fabb1e09c6f\": not found"
Mar 2 13:19:24.320890 kubelet[2581]: I0302 13:19:24.319668 2581 scope.go:117] "RemoveContainer" containerID="956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3"
Mar 2 13:19:24.323874 containerd[1459]: time="2026-03-02T13:19:24.323472787Z" level=info msg="RemoveContainer for \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\""
Mar 2 13:19:24.330410 containerd[1459]: time="2026-03-02T13:19:24.330257126Z" level=info msg="RemoveContainer for \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\" returns successfully"
Mar 2 13:19:24.331698 kubelet[2581]: I0302 13:19:24.331520 2581 scope.go:117] "RemoveContainer" containerID="5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061"
Mar 2 13:19:24.342372 containerd[1459]: time="2026-03-02T13:19:24.342235855Z" level=info msg="RemoveContainer for \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\""
Mar 2 13:19:24.350612 containerd[1459]: time="2026-03-02T13:19:24.350458685Z" level=info msg="RemoveContainer for \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\" returns successfully"
Mar 2 13:19:24.351251 kubelet[2581]: I0302 13:19:24.351078 2581 scope.go:117] "RemoveContainer" containerID="cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7"
Mar 2 13:19:24.353952 containerd[1459]: time="2026-03-02T13:19:24.353492314Z" level=info msg="RemoveContainer for \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\""
Mar 2 13:19:24.363775 containerd[1459]: time="2026-03-02T13:19:24.363620526Z" level=info msg="RemoveContainer for \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\" returns successfully"
Mar 2 13:19:24.364450 kubelet[2581]: I0302 13:19:24.363959 2581 scope.go:117] "RemoveContainer" containerID="77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf"
Mar 2 13:19:24.368205 containerd[1459]: time="2026-03-02T13:19:24.368175528Z" level=info msg="RemoveContainer for \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\""
Mar 2 13:19:24.376490 containerd[1459]: time="2026-03-02T13:19:24.376360788Z" level=info msg="RemoveContainer for \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\" returns successfully"
Mar 2 13:19:24.377617 kubelet[2581]: I0302 13:19:24.377417 2581 scope.go:117] "RemoveContainer" containerID="7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28"
Mar 2 13:19:24.380424 containerd[1459]: time="2026-03-02T13:19:24.380318145Z" level=info msg="RemoveContainer for \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\""
Mar 2 13:19:24.386227 containerd[1459]: time="2026-03-02T13:19:24.386080586Z" level=info msg="RemoveContainer for \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\" returns successfully"
Mar 2 13:19:24.386533 kubelet[2581]: I0302 13:19:24.386374 2581 scope.go:117] "RemoveContainer" containerID="956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3"
Mar 2 13:19:24.386957 containerd[1459]: time="2026-03-02T13:19:24.386899349Z" level=error msg="ContainerStatus for \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\": not found"
Mar 2 13:19:24.387233 kubelet[2581]: E0302 13:19:24.387195 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\": not found" containerID="956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3"
Mar 2 13:19:24.387388 kubelet[2581]: I0302 13:19:24.387234 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3"} err="failed to get container status \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"956dcbabbae829d49446b504c7dfabc45fb195b6f3ba9c51d9cb714b5268b6d3\": not found"
Mar 2 13:19:24.387388 kubelet[2581]: I0302 13:19:24.387265 2581 scope.go:117] "RemoveContainer" containerID="5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061"
Mar 2 13:19:24.387725 containerd[1459]: time="2026-03-02T13:19:24.387675905Z" level=error msg="ContainerStatus for \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\": not found"
Mar 2 13:19:24.388060 kubelet[2581]: E0302 13:19:24.387951 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\": not found" containerID="5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061"
Mar 2 13:19:24.388060 kubelet[2581]: I0302 13:19:24.387999 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061"} err="failed to get container status \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f33581abda5ccc6fcad79d8e5a86494268aa6d174122254290237b452b7c061\": not found"
Mar 2 13:19:24.388060 kubelet[2581]: I0302 13:19:24.388022 2581 scope.go:117] "RemoveContainer" containerID="cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7"
Mar 2 13:19:24.388371 containerd[1459]: time="2026-03-02T13:19:24.388319335Z" level=error msg="ContainerStatus for \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\": not found"
Mar 2 13:19:24.388693 kubelet[2581]: E0302 13:19:24.388657 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\": not found" containerID="cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7"
Mar 2 13:19:24.388750 kubelet[2581]: I0302 13:19:24.388693 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7"} err="failed to get container status \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb9c73a7a98cb4d79eaaaf160efacc6a0606ab8e8f46b6ffebe5685789297fb7\": not found"
Mar 2 13:19:24.388750 kubelet[2581]: I0302 13:19:24.388708 2581 scope.go:117] "RemoveContainer" containerID="77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf"
Mar 2 13:19:24.388949 containerd[1459]: time="2026-03-02T13:19:24.388917143Z" level=error msg="ContainerStatus for \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\": not found"
Mar 2 13:19:24.389374 kubelet[2581]: E0302 13:19:24.389348 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\": not found" containerID="77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf"
Mar 2 13:19:24.389468 kubelet[2581]: I0302 13:19:24.389379 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf"} err="failed to get container status \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\": rpc error: code = NotFound desc = an error occurred when try to find container \"77396a9dacc3384e89ea87e6abe8f20c814894fe422b3d9761a35b7bacca5ccf\": not found"
Mar 2 13:19:24.389468 kubelet[2581]: I0302 13:19:24.389431 2581 scope.go:117] "RemoveContainer" containerID="7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28"
Mar 2 13:19:24.389958 containerd[1459]: time="2026-03-02T13:19:24.389870645Z" level=error msg="ContainerStatus for \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\": not found"
Mar 2 13:19:24.390179 kubelet[2581]: E0302 13:19:24.390112 2581 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\": not found" containerID="7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28"
Mar 2 13:19:24.390179 kubelet[2581]: I0302 13:19:24.390145 2581 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28"} err="failed to get container status \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c5ef3e0e3ef80755aa7c300926d282af64ba37a08e955e4ec1e176f8b666a28\": not found"
Mar 2 13:19:24.840301 sshd[4409]: pam_unix(sshd:session): session closed for user core
Mar 2 13:19:24.851893 systemd[1]: sshd@36-10.0.0.96:22-10.0.0.1:54270.service: Deactivated successfully.
Mar 2 13:19:24.854010 systemd[1]: session-37.scope: Deactivated successfully.
Mar 2 13:19:24.854354 systemd[1]: session-37.scope: Consumed 1.100s CPU time.
Mar 2 13:19:24.855469 systemd-logind[1441]: Session 37 logged out. Waiting for processes to exit.
Mar 2 13:19:24.867124 systemd[1]: Started sshd@37-10.0.0.96:22-10.0.0.1:54278.service - OpenSSH per-connection server daemon (10.0.0.1:54278).
Mar 2 13:19:24.870619 systemd-logind[1441]: Removed session 37.
Mar 2 13:19:24.918623 sshd[4568]: Accepted publickey for core from 10.0.0.1 port 54278 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:19:24.921268 sshd[4568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:19:24.932160 systemd-logind[1441]: New session 38 of user core.
Mar 2 13:19:24.949975 systemd[1]: Started session-38.scope - Session 38 of User core.
Mar 2 13:19:25.857410 sshd[4568]: pam_unix(sshd:session): session closed for user core
Mar 2 13:19:25.890041 systemd[1]: sshd@37-10.0.0.96:22-10.0.0.1:54278.service: Deactivated successfully.
Mar 2 13:19:25.901338 systemd[1]: session-38.scope: Deactivated successfully.
Mar 2 13:19:25.911720 systemd-logind[1441]: Session 38 logged out. Waiting for processes to exit.
Mar 2 13:19:25.932870 systemd[1]: Started sshd@38-10.0.0.96:22-10.0.0.1:54280.service - OpenSSH per-connection server daemon (10.0.0.1:54280).
Mar 2 13:19:25.939868 systemd-logind[1441]: Removed session 38.
Mar 2 13:19:25.982944 kubelet[2581]: I0302 13:19:25.972352 2581 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b9e107c-328c-4cd0-ab60-66ddbd87ee7f" path="/var/lib/kubelet/pods/6b9e107c-328c-4cd0-ab60-66ddbd87ee7f/volumes"
Mar 2 13:19:25.982944 kubelet[2581]: I0302 13:19:25.977207 2581 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7bee1b3-a132-4cc0-a032-d16919f8e65b" path="/var/lib/kubelet/pods/a7bee1b3-a132-4cc0-a032-d16919f8e65b/volumes"
Mar 2 13:19:26.035964 systemd[1]: Created slice kubepods-burstable-pod44675cff_1855_4fb2_ae60_be322969dfd0.slice - libcontainer container kubepods-burstable-pod44675cff_1855_4fb2_ae60_be322969dfd0.slice.
Mar 2 13:19:26.042247 sshd[4581]: Accepted publickey for core from 10.0.0.1 port 54280 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:19:26.044163 sshd[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:19:26.055227 systemd-logind[1441]: New session 39 of user core.
Mar 2 13:19:26.067621 systemd[1]: Started session-39.scope - Session 39 of User core.
Mar 2 13:19:26.086658 kubelet[2581]: I0302 13:19:26.085773 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44675cff-1855-4fb2-ae60-be322969dfd0-host-proc-sys-kernel\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.087324 kubelet[2581]: I0302 13:19:26.087232 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbt9x\" (UniqueName: \"kubernetes.io/projected/44675cff-1855-4fb2-ae60-be322969dfd0-kube-api-access-qbt9x\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.089251 kubelet[2581]: I0302 13:19:26.087938 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44675cff-1855-4fb2-ae60-be322969dfd0-hubble-tls\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.089792 kubelet[2581]: I0302 13:19:26.089721 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44675cff-1855-4fb2-ae60-be322969dfd0-bpf-maps\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.089872 kubelet[2581]: I0302 13:19:26.089806 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44675cff-1855-4fb2-ae60-be322969dfd0-etc-cni-netd\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.089872 kubelet[2581]: I0302 13:19:26.089835 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44675cff-1855-4fb2-ae60-be322969dfd0-lib-modules\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.089872 kubelet[2581]: I0302 13:19:26.089864 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44675cff-1855-4fb2-ae60-be322969dfd0-hostproc\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.089984 kubelet[2581]: I0302 13:19:26.089893 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44675cff-1855-4fb2-ae60-be322969dfd0-cilium-run\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.090122 kubelet[2581]: I0302 13:19:26.090040 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44675cff-1855-4fb2-ae60-be322969dfd0-cilium-cgroup\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.090193 kubelet[2581]: I0302 13:19:26.090131 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44675cff-1855-4fb2-ae60-be322969dfd0-clustermesh-secrets\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.090193 kubelet[2581]: I0302 13:19:26.090178 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44675cff-1855-4fb2-ae60-be322969dfd0-cni-path\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.090273 kubelet[2581]: I0302 13:19:26.090248 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44675cff-1855-4fb2-ae60-be322969dfd0-xtables-lock\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.090318 kubelet[2581]: I0302 13:19:26.090277 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44675cff-1855-4fb2-ae60-be322969dfd0-cilium-config-path\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.090318 kubelet[2581]: I0302 13:19:26.090304 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/44675cff-1855-4fb2-ae60-be322969dfd0-cilium-ipsec-secrets\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.090394 kubelet[2581]: I0302 13:19:26.090326 2581 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44675cff-1855-4fb2-ae60-be322969dfd0-host-proc-sys-net\") pod \"cilium-ql7bp\" (UID: \"44675cff-1855-4fb2-ae60-be322969dfd0\") " pod="kube-system/cilium-ql7bp"
Mar 2 13:19:26.143110 sshd[4581]: pam_unix(sshd:session): session closed for user core
Mar 2 13:19:26.159155 systemd[1]: sshd@38-10.0.0.96:22-10.0.0.1:54280.service: Deactivated successfully.
Mar 2 13:19:26.161975 systemd[1]: session-39.scope: Deactivated successfully.
Mar 2 13:19:26.166378 systemd-logind[1441]: Session 39 logged out. Waiting for processes to exit.
Mar 2 13:19:26.186304 systemd[1]: Started sshd@39-10.0.0.96:22-10.0.0.1:54288.service - OpenSSH per-connection server daemon (10.0.0.1:54288).
Mar 2 13:19:26.189904 systemd-logind[1441]: Removed session 39.
Mar 2 13:19:26.248854 sshd[4589]: Accepted publickey for core from 10.0.0.1 port 54288 ssh2: RSA SHA256:nX7kcAjijV+5vNj4MtdzokA/U/H37jMwDhMaWkkF8FM
Mar 2 13:19:26.253004 sshd[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 2 13:19:26.266277 systemd-logind[1441]: New session 40 of user core.
Mar 2 13:19:26.278605 systemd[1]: Started session-40.scope - Session 40 of User core.
Mar 2 13:19:26.350237 kubelet[2581]: E0302 13:19:26.349270 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:26.350376 containerd[1459]: time="2026-03-02T13:19:26.350193126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ql7bp,Uid:44675cff-1855-4fb2-ae60-be322969dfd0,Namespace:kube-system,Attempt:0,}"
Mar 2 13:19:26.437689 kubelet[2581]: E0302 13:19:26.437316 2581 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Mar 2 13:19:26.467725 containerd[1459]: time="2026-03-02T13:19:26.466723079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 2 13:19:26.467725 containerd[1459]: time="2026-03-02T13:19:26.466872326Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 2 13:19:26.467725 containerd[1459]: time="2026-03-02T13:19:26.466887653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:19:26.467725 containerd[1459]: time="2026-03-02T13:19:26.467020469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 2 13:19:26.512067 systemd[1]: Started cri-containerd-b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee.scope - libcontainer container b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee.
Mar 2 13:19:26.590864 containerd[1459]: time="2026-03-02T13:19:26.590609726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ql7bp,Uid:44675cff-1855-4fb2-ae60-be322969dfd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\""
Mar 2 13:19:26.592526 kubelet[2581]: E0302 13:19:26.592486 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:26.605670 containerd[1459]: time="2026-03-02T13:19:26.605503132Z" level=info msg="CreateContainer within sandbox \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Mar 2 13:19:26.644998 containerd[1459]: time="2026-03-02T13:19:26.644865278Z" level=info msg="CreateContainer within sandbox \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6757ae6d6528a74d63bff3dca28acba893b8967a9dde21581bcf77bd44419a8f\""
Mar 2 13:19:26.646105 containerd[1459]: time="2026-03-02T13:19:26.645951865Z" level=info msg="StartContainer for \"6757ae6d6528a74d63bff3dca28acba893b8967a9dde21581bcf77bd44419a8f\""
Mar 2 13:19:26.708534 systemd[1]: Started cri-containerd-6757ae6d6528a74d63bff3dca28acba893b8967a9dde21581bcf77bd44419a8f.scope - libcontainer container 6757ae6d6528a74d63bff3dca28acba893b8967a9dde21581bcf77bd44419a8f.
Mar 2 13:19:26.777838 containerd[1459]: time="2026-03-02T13:19:26.777345242Z" level=info msg="StartContainer for \"6757ae6d6528a74d63bff3dca28acba893b8967a9dde21581bcf77bd44419a8f\" returns successfully"
Mar 2 13:19:26.798773 systemd[1]: cri-containerd-6757ae6d6528a74d63bff3dca28acba893b8967a9dde21581bcf77bd44419a8f.scope: Deactivated successfully.
Mar 2 13:19:26.898912 containerd[1459]: time="2026-03-02T13:19:26.898744526Z" level=info msg="shim disconnected" id=6757ae6d6528a74d63bff3dca28acba893b8967a9dde21581bcf77bd44419a8f namespace=k8s.io
Mar 2 13:19:26.898912 containerd[1459]: time="2026-03-02T13:19:26.898842602Z" level=warning msg="cleaning up after shim disconnected" id=6757ae6d6528a74d63bff3dca28acba893b8967a9dde21581bcf77bd44419a8f namespace=k8s.io
Mar 2 13:19:26.898912 containerd[1459]: time="2026-03-02T13:19:26.898860563Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:19:26.933283 containerd[1459]: time="2026-03-02T13:19:26.933221888Z" level=warning msg="cleanup warnings time=\"2026-03-02T13:19:26Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Mar 2 13:19:27.292439 kubelet[2581]: E0302 13:19:27.292031 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:27.311331 containerd[1459]: time="2026-03-02T13:19:27.311271862Z" level=info msg="CreateContainer within sandbox \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Mar 2 13:19:27.363401 containerd[1459]: time="2026-03-02T13:19:27.362767807Z" level=info msg="CreateContainer within sandbox \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9079d9f88c03c97d598f92648145fc0ec59ec1f121761f7de40b34a88b54df5e\""
Mar 2 13:19:27.368653 containerd[1459]: time="2026-03-02T13:19:27.366038230Z" level=info msg="StartContainer for \"9079d9f88c03c97d598f92648145fc0ec59ec1f121761f7de40b34a88b54df5e\""
Mar 2 13:19:27.427898 systemd[1]: Started cri-containerd-9079d9f88c03c97d598f92648145fc0ec59ec1f121761f7de40b34a88b54df5e.scope - libcontainer container 9079d9f88c03c97d598f92648145fc0ec59ec1f121761f7de40b34a88b54df5e.
Mar 2 13:19:27.524636 containerd[1459]: time="2026-03-02T13:19:27.524436146Z" level=info msg="StartContainer for \"9079d9f88c03c97d598f92648145fc0ec59ec1f121761f7de40b34a88b54df5e\" returns successfully"
Mar 2 13:19:27.539347 systemd[1]: cri-containerd-9079d9f88c03c97d598f92648145fc0ec59ec1f121761f7de40b34a88b54df5e.scope: Deactivated successfully.
Mar 2 13:19:27.620324 containerd[1459]: time="2026-03-02T13:19:27.618823369Z" level=info msg="shim disconnected" id=9079d9f88c03c97d598f92648145fc0ec59ec1f121761f7de40b34a88b54df5e namespace=k8s.io
Mar 2 13:19:27.620324 containerd[1459]: time="2026-03-02T13:19:27.618930982Z" level=warning msg="cleaning up after shim disconnected" id=9079d9f88c03c97d598f92648145fc0ec59ec1f121761f7de40b34a88b54df5e namespace=k8s.io
Mar 2 13:19:27.620324 containerd[1459]: time="2026-03-02T13:19:27.618949405Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:19:28.206308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9079d9f88c03c97d598f92648145fc0ec59ec1f121761f7de40b34a88b54df5e-rootfs.mount: Deactivated successfully.
Mar 2 13:19:28.301418 kubelet[2581]: E0302 13:19:28.301297 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:28.313047 containerd[1459]: time="2026-03-02T13:19:28.312980020Z" level=info msg="CreateContainer within sandbox \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Mar 2 13:19:28.349751 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3710755878.mount: Deactivated successfully.
Mar 2 13:19:28.360984 containerd[1459]: time="2026-03-02T13:19:28.360868033Z" level=info msg="CreateContainer within sandbox \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"903dbbde1853e0adc50b163118f82ac7f985ebed6d257b39d702c41c282741d0\""
Mar 2 13:19:28.362869 containerd[1459]: time="2026-03-02T13:19:28.362763290Z" level=info msg="StartContainer for \"903dbbde1853e0adc50b163118f82ac7f985ebed6d257b39d702c41c282741d0\""
Mar 2 13:19:28.439853 systemd[1]: Started cri-containerd-903dbbde1853e0adc50b163118f82ac7f985ebed6d257b39d702c41c282741d0.scope - libcontainer container 903dbbde1853e0adc50b163118f82ac7f985ebed6d257b39d702c41c282741d0.
Mar 2 13:19:28.515439 containerd[1459]: time="2026-03-02T13:19:28.515205633Z" level=info msg="StartContainer for \"903dbbde1853e0adc50b163118f82ac7f985ebed6d257b39d702c41c282741d0\" returns successfully"
Mar 2 13:19:28.538312 systemd[1]: cri-containerd-903dbbde1853e0adc50b163118f82ac7f985ebed6d257b39d702c41c282741d0.scope: Deactivated successfully.
Mar 2 13:19:28.593767 containerd[1459]: time="2026-03-02T13:19:28.593659668Z" level=info msg="shim disconnected" id=903dbbde1853e0adc50b163118f82ac7f985ebed6d257b39d702c41c282741d0 namespace=k8s.io
Mar 2 13:19:28.593767 containerd[1459]: time="2026-03-02T13:19:28.593728120Z" level=warning msg="cleaning up after shim disconnected" id=903dbbde1853e0adc50b163118f82ac7f985ebed6d257b39d702c41c282741d0 namespace=k8s.io
Mar 2 13:19:28.593767 containerd[1459]: time="2026-03-02T13:19:28.593741514Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:19:29.024590 kubelet[2581]: I0302 13:19:29.023831 2581 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-02T13:19:29Z","lastTransitionTime":"2026-03-02T13:19:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Mar 2 13:19:29.204992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-903dbbde1853e0adc50b163118f82ac7f985ebed6d257b39d702c41c282741d0-rootfs.mount: Deactivated successfully.
Mar 2 13:19:29.325790 kubelet[2581]: E0302 13:19:29.313640 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:29.334512 containerd[1459]: time="2026-03-02T13:19:29.334345190Z" level=info msg="CreateContainer within sandbox \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Mar 2 13:19:29.383523 containerd[1459]: time="2026-03-02T13:19:29.381810672Z" level=info msg="CreateContainer within sandbox \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"666475b89f9edb6b9472e2ae9f70116a3597fd3f7d3676abe7b2501656030674\""
Mar 2 13:19:29.385835 containerd[1459]: time="2026-03-02T13:19:29.385681803Z" level=info msg="StartContainer for \"666475b89f9edb6b9472e2ae9f70116a3597fd3f7d3676abe7b2501656030674\""
Mar 2 13:19:29.472272 systemd[1]: Started cri-containerd-666475b89f9edb6b9472e2ae9f70116a3597fd3f7d3676abe7b2501656030674.scope - libcontainer container 666475b89f9edb6b9472e2ae9f70116a3597fd3f7d3676abe7b2501656030674.
Mar 2 13:19:29.511360 systemd[1]: cri-containerd-666475b89f9edb6b9472e2ae9f70116a3597fd3f7d3676abe7b2501656030674.scope: Deactivated successfully.
Mar 2 13:19:29.518161 containerd[1459]: time="2026-03-02T13:19:29.517903810Z" level=info msg="StartContainer for \"666475b89f9edb6b9472e2ae9f70116a3597fd3f7d3676abe7b2501656030674\" returns successfully"
Mar 2 13:19:29.567318 containerd[1459]: time="2026-03-02T13:19:29.567032982Z" level=info msg="shim disconnected" id=666475b89f9edb6b9472e2ae9f70116a3597fd3f7d3676abe7b2501656030674 namespace=k8s.io
Mar 2 13:19:29.567318 containerd[1459]: time="2026-03-02T13:19:29.567120488Z" level=warning msg="cleaning up after shim disconnected" id=666475b89f9edb6b9472e2ae9f70116a3597fd3f7d3676abe7b2501656030674 namespace=k8s.io
Mar 2 13:19:29.567318 containerd[1459]: time="2026-03-02T13:19:29.567133171Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Mar 2 13:19:30.207930 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-666475b89f9edb6b9472e2ae9f70116a3597fd3f7d3676abe7b2501656030674-rootfs.mount: Deactivated successfully.
Mar 2 13:19:30.331179 kubelet[2581]: E0302 13:19:30.330708 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:30.361056 containerd[1459]: time="2026-03-02T13:19:30.360302691Z" level=info msg="CreateContainer within sandbox \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Mar 2 13:19:30.396632 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3393324422.mount: Deactivated successfully.
Mar 2 13:19:30.404978 containerd[1459]: time="2026-03-02T13:19:30.404818666Z" level=info msg="CreateContainer within sandbox \"b01cb433d2460bb9322da42523b545e27e15f197e23cdc4f81adca0fa45ba9ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9789b561328b90df64600275063e82746f10f0891d54c642328dcf8c0a40a6a6\""
Mar 2 13:19:30.408299 containerd[1459]: time="2026-03-02T13:19:30.406323051Z" level=info msg="StartContainer for \"9789b561328b90df64600275063e82746f10f0891d54c642328dcf8c0a40a6a6\""
Mar 2 13:19:30.477951 systemd[1]: Started cri-containerd-9789b561328b90df64600275063e82746f10f0891d54c642328dcf8c0a40a6a6.scope - libcontainer container 9789b561328b90df64600275063e82746f10f0891d54c642328dcf8c0a40a6a6.
Mar 2 13:19:30.529398 containerd[1459]: time="2026-03-02T13:19:30.529028695Z" level=info msg="StartContainer for \"9789b561328b90df64600275063e82746f10f0891d54c642328dcf8c0a40a6a6\" returns successfully"
Mar 2 13:19:31.361914 kubelet[2581]: E0302 13:19:31.360514 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:31.365671 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Mar 2 13:19:31.404027 kubelet[2581]: I0302 13:19:31.403928 2581 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-ql7bp" podStartSLOduration=6.403909309 podStartE2EDuration="6.403909309s" podCreationTimestamp="2026-03-02 13:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-02 13:19:31.402941674 +0000 UTC m=+175.763219553" watchObservedRunningTime="2026-03-02 13:19:31.403909309 +0000 UTC m=+175.764187159"
Mar 2 13:19:31.922370 kubelet[2581]: E0302 13:19:31.920670 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:32.368292 kubelet[2581]: E0302 13:19:32.365703 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:33.069899 systemd[1]: run-containerd-runc-k8s.io-9789b561328b90df64600275063e82746f10f0891d54c642328dcf8c0a40a6a6-runc.UefQij.mount: Deactivated successfully.
Mar 2 13:19:35.884516 containerd[1459]: time="2026-03-02T13:19:35.884423790Z" level=info msg="StopPodSandbox for \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\""
Mar 2 13:19:35.885716 containerd[1459]: time="2026-03-02T13:19:35.884647233Z" level=info msg="TearDown network for sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" successfully"
Mar 2 13:19:35.885716 containerd[1459]: time="2026-03-02T13:19:35.884671236Z" level=info msg="StopPodSandbox for \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" returns successfully"
Mar 2 13:19:35.885716 containerd[1459]: time="2026-03-02T13:19:35.885080834Z" level=info msg="RemovePodSandbox for \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\""
Mar 2 13:19:35.885716 containerd[1459]: time="2026-03-02T13:19:35.885112381Z" level=info msg="Forcibly stopping sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\""
Mar 2 13:19:35.885716 containerd[1459]: time="2026-03-02T13:19:35.885182157Z" level=info msg="TearDown network for sandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" successfully"
Mar 2 13:19:35.898924 containerd[1459]: time="2026-03-02T13:19:35.898850307Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 2 13:19:35.899126 containerd[1459]: time="2026-03-02T13:19:35.898955496Z" level=info msg="RemovePodSandbox \"9f7247ee64d35724cd2d58572d627c01d1b0dd9ba2e5a8e6042b6571889d8bea\" returns successfully"
Mar 2 13:19:35.899926 containerd[1459]: time="2026-03-02T13:19:35.899873074Z" level=info msg="StopPodSandbox for \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\""
Mar 2 13:19:35.900090 containerd[1459]: time="2026-03-02T13:19:35.900046647Z" level=info msg="TearDown network for sandbox \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\" successfully"
Mar 2 13:19:35.900144 containerd[1459]: time="2026-03-02T13:19:35.900088773Z" level=info msg="StopPodSandbox for \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\" returns successfully"
Mar 2 13:19:35.902626 containerd[1459]: time="2026-03-02T13:19:35.900675394Z" level=info msg="RemovePodSandbox for \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\""
Mar 2 13:19:35.902626 containerd[1459]: time="2026-03-02T13:19:35.900705458Z" level=info msg="Forcibly stopping sandbox \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\""
Mar 2 13:19:35.902626 containerd[1459]: time="2026-03-02T13:19:35.900780203Z" level=info msg="TearDown network for sandbox \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\" successfully"
Mar 2 13:19:35.909807 containerd[1459]: time="2026-03-02T13:19:35.909358202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 2 13:19:35.909807 containerd[1459]: time="2026-03-02T13:19:35.909456580Z" level=info msg="RemovePodSandbox \"84de0ce78abcf71bb3a2db0d1acc60d86c8fde5ad817f7b5ec415b878a2f295a\" returns successfully"
Mar 2 13:19:36.990760 systemd-networkd[1385]: lxc_health: Link UP
Mar 2 13:19:37.004000 systemd-networkd[1385]: lxc_health: Gained carrier
Mar 2 13:19:38.202947 systemd-networkd[1385]: lxc_health: Gained IPv6LL
Mar 2 13:19:38.367093 kubelet[2581]: E0302 13:19:38.361240 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:38.393116 kubelet[2581]: E0302 13:19:38.393076 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:39.391833 kubelet[2581]: E0302 13:19:39.391051 2581 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 2 13:19:42.874050 sshd[4589]: pam_unix(sshd:session): session closed for user core
Mar 2 13:19:42.903132 systemd[1]: sshd@39-10.0.0.96:22-10.0.0.1:54288.service: Deactivated successfully.
Mar 2 13:19:42.910928 systemd[1]: session-40.scope: Deactivated successfully.
Mar 2 13:19:42.915788 systemd-logind[1441]: Session 40 logged out. Waiting for processes to exit.
Mar 2 13:19:42.923479 systemd-logind[1441]: Removed session 40.