Apr 14 13:16:35.941667 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 14 13:16:35.941686 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:16:35.941695 kernel: BIOS-provided physical RAM map:
Apr 14 13:16:35.941700 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 14 13:16:35.941704 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 14 13:16:35.941708 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 14 13:16:35.941713 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 14 13:16:35.941717 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 14 13:16:35.941721 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 14 13:16:35.941727 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 14 13:16:35.941731 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 14 13:16:35.941735 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 14 13:16:35.941739 kernel: NX (Execute Disable) protection: active
Apr 14 13:16:35.941786 kernel: APIC: Static calls initialized
Apr 14 13:16:35.941791 kernel: SMBIOS 2.8 present.
Apr 14 13:16:35.941798 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 14 13:16:35.941803 kernel: Hypervisor detected: KVM
Apr 14 13:16:35.941808 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 14 13:16:35.941812 kernel: kvm-clock: using sched offset of 4305030536 cycles
Apr 14 13:16:35.941818 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 14 13:16:35.941822 kernel: tsc: Detected 2793.438 MHz processor
Apr 14 13:16:35.941828 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 14 13:16:35.941833 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 14 13:16:35.941837 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 14 13:16:35.941844 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 14 13:16:35.941849 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 14 13:16:35.941854 kernel: Using GB pages for direct mapping
Apr 14 13:16:35.941858 kernel: ACPI: Early table checksum verification disabled
Apr 14 13:16:35.941863 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 14 13:16:35.941868 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:35.941873 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:35.941877 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:35.941882 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 14 13:16:35.941888 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:35.941893 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:35.941897 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:35.941902 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 13:16:35.941907 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 14 13:16:35.941911 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 14 13:16:35.941916 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 14 13:16:35.941925 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 14 13:16:35.941929 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 14 13:16:35.941934 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 14 13:16:35.941939 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 14 13:16:35.941944 kernel: No NUMA configuration found
Apr 14 13:16:35.941949 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 14 13:16:35.941954 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 14 13:16:35.941960 kernel: Zone ranges:
Apr 14 13:16:35.941965 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 14 13:16:35.941970 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 14 13:16:35.941975 kernel: Normal empty
Apr 14 13:16:35.941980 kernel: Movable zone start for each node
Apr 14 13:16:35.941985 kernel: Early memory node ranges
Apr 14 13:16:35.941990 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 14 13:16:35.941994 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 14 13:16:35.941999 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 14 13:16:35.942004 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 14 13:16:35.942011 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 14 13:16:35.942016 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 14 13:16:35.942021 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 14 13:16:35.942025 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 14 13:16:35.942030 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 14 13:16:35.942035 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 14 13:16:35.942040 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 14 13:16:35.942045 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 14 13:16:35.942050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 14 13:16:35.942056 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 14 13:16:35.942061 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 14 13:16:35.942066 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 14 13:16:35.942071 kernel: TSC deadline timer available
Apr 14 13:16:35.942076 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 14 13:16:35.942081 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 14 13:16:35.942086 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 14 13:16:35.942091 kernel: kvm-guest: setup PV sched yield
Apr 14 13:16:35.942096 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 14 13:16:35.942102 kernel: Booting paravirtualized kernel on KVM
Apr 14 13:16:35.942108 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 14 13:16:35.942113 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 14 13:16:35.942118 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 14 13:16:35.942123 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 14 13:16:35.942128 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 14 13:16:35.942132 kernel: kvm-guest: PV spinlocks enabled
Apr 14 13:16:35.942138 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 14 13:16:35.942143 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 13:16:35.942150 kernel: random: crng init done
Apr 14 13:16:35.942155 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 14 13:16:35.942160 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 14 13:16:35.942165 kernel: Fallback order for Node 0: 0
Apr 14 13:16:35.942170 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 14 13:16:35.942175 kernel: Policy zone: DMA32
Apr 14 13:16:35.942180 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 14 13:16:35.942185 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137896K reserved, 0K cma-reserved)
Apr 14 13:16:35.942192 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 14 13:16:35.942196 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 14 13:16:35.942201 kernel: ftrace: allocated 149 pages with 4 groups
Apr 14 13:16:35.942206 kernel: Dynamic Preempt: voluntary
Apr 14 13:16:35.942211 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 14 13:16:35.942217 kernel: rcu: RCU event tracing is enabled.
Apr 14 13:16:35.942222 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 14 13:16:35.942227 kernel: Trampoline variant of Tasks RCU enabled.
Apr 14 13:16:35.942232 kernel: Rude variant of Tasks RCU enabled.
Apr 14 13:16:35.942237 kernel: Tracing variant of Tasks RCU enabled.
Apr 14 13:16:35.942243 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 14 13:16:35.942248 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 14 13:16:35.942253 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 14 13:16:35.942258 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 14 13:16:35.942263 kernel: Console: colour VGA+ 80x25
Apr 14 13:16:35.942268 kernel: printk: console [ttyS0] enabled
Apr 14 13:16:35.942273 kernel: ACPI: Core revision 20230628
Apr 14 13:16:35.942278 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 14 13:16:35.942283 kernel: APIC: Switch to symmetric I/O mode setup
Apr 14 13:16:35.942289 kernel: x2apic enabled
Apr 14 13:16:35.942294 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 14 13:16:35.942299 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 14 13:16:35.942304 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 14 13:16:35.942309 kernel: kvm-guest: setup PV IPIs
Apr 14 13:16:35.942314 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 14 13:16:35.942319 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 13:16:35.942332 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 14 13:16:35.942338 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 14 13:16:35.942343 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 14 13:16:35.942348 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 14 13:16:35.942354 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 14 13:16:35.942361 kernel: Spectre V2 : Mitigation: Retpolines
Apr 14 13:16:35.942366 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 14 13:16:35.942372 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 14 13:16:35.942377 kernel: RETBleed: Vulnerable
Apr 14 13:16:35.942384 kernel: Speculative Store Bypass: Vulnerable
Apr 14 13:16:35.942390 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 14 13:16:35.942395 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 14 13:16:35.942401 kernel: active return thunk: its_return_thunk
Apr 14 13:16:35.942406 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 14 13:16:35.942411 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 14 13:16:35.942417 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 14 13:16:35.942422 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 14 13:16:35.942428 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 14 13:16:35.942435 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 14 13:16:35.942440 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 14 13:16:35.942446 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 14 13:16:35.942451 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 14 13:16:35.942456 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 14 13:16:35.942462 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 14 13:16:35.942467 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 14 13:16:35.942473 kernel: Freeing SMP alternatives memory: 32K
Apr 14 13:16:35.942478 kernel: pid_max: default: 32768 minimum: 301
Apr 14 13:16:35.942485 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 14 13:16:35.942490 kernel: landlock: Up and running.
Apr 14 13:16:35.942495 kernel: SELinux: Initializing.
Apr 14 13:16:35.942501 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 13:16:35.942506 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 13:16:35.942512 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 14 13:16:35.942517 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:16:35.942523 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:16:35.942528 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 13:16:35.942535 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 14 13:16:35.942541 kernel: signal: max sigframe size: 3632
Apr 14 13:16:35.942546 kernel: rcu: Hierarchical SRCU implementation.
Apr 14 13:16:35.942552 kernel: rcu: Max phase no-delay instances is 400.
Apr 14 13:16:35.942579 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 14 13:16:35.942585 kernel: smp: Bringing up secondary CPUs ...
Apr 14 13:16:35.942590 kernel: smpboot: x86: Booting SMP configuration:
Apr 14 13:16:35.942596 kernel: .... node #0, CPUs: #1 #2 #3
Apr 14 13:16:35.942601 kernel: smp: Brought up 1 node, 4 CPUs
Apr 14 13:16:35.942608 kernel: smpboot: Max logical packages: 1
Apr 14 13:16:35.942614 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 14 13:16:35.942619 kernel: devtmpfs: initialized
Apr 14 13:16:35.942625 kernel: x86/mm: Memory block size: 128MB
Apr 14 13:16:35.942630 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 14 13:16:35.942636 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 14 13:16:35.942641 kernel: pinctrl core: initialized pinctrl subsystem
Apr 14 13:16:35.942647 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 14 13:16:35.942652 kernel: audit: initializing netlink subsys (disabled)
Apr 14 13:16:35.942659 kernel: audit: type=2000 audit(1776172594.565:1): state=initialized audit_enabled=0 res=1
Apr 14 13:16:35.942664 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 14 13:16:35.942670 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 14 13:16:35.942675 kernel: cpuidle: using governor menu
Apr 14 13:16:35.942681 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 14 13:16:35.942686 kernel: dca service started, version 1.12.1
Apr 14 13:16:35.942691 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 14 13:16:35.942697 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 14 13:16:35.942702 kernel: PCI: Using configuration type 1 for base access
Apr 14 13:16:35.942709 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 14 13:16:35.942715 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 14 13:16:35.942720 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 14 13:16:35.942726 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 14 13:16:35.942731 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 14 13:16:35.942737 kernel: ACPI: Added _OSI(Module Device)
Apr 14 13:16:35.942831 kernel: ACPI: Added _OSI(Processor Device)
Apr 14 13:16:35.942837 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 14 13:16:35.942842 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 14 13:16:35.942850 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 14 13:16:35.942856 kernel: ACPI: Interpreter enabled
Apr 14 13:16:35.942861 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 14 13:16:35.942867 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 14 13:16:35.942872 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 14 13:16:35.942878 kernel: PCI: Using E820 reservations for host bridge windows
Apr 14 13:16:35.942883 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 14 13:16:35.942888 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 14 13:16:35.943001 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 14 13:16:35.943068 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 14 13:16:35.943125 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 14 13:16:35.943132 kernel: PCI host bridge to bus 0000:00
Apr 14 13:16:35.943192 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 14 13:16:35.943242 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 14 13:16:35.943291 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 14 13:16:35.943341 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 14 13:16:35.943389 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 14 13:16:35.943437 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 14 13:16:35.943486 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 14 13:16:35.943581 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 14 13:16:35.943663 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 14 13:16:35.943723 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 14 13:16:35.943826 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 14 13:16:35.943884 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 14 13:16:35.943940 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 14 13:16:35.944002 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 14 13:16:35.944058 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 14 13:16:35.944113 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 14 13:16:35.944170 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 14 13:16:35.944231 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 14 13:16:35.944288 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 14 13:16:35.944345 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 14 13:16:35.944401 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 14 13:16:35.944464 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 14 13:16:35.944519 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 14 13:16:35.944602 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 14 13:16:35.944657 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 14 13:16:35.944713 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 14 13:16:35.944917 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 14 13:16:35.944976 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 14 13:16:35.945032 kernel: pci 0000:00:1f.0: quirk_ich7_lpc+0x0/0x180 took 14648 usecs
Apr 14 13:16:35.945091 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 14 13:16:35.945151 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 14 13:16:35.945206 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 14 13:16:35.945265 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 14 13:16:35.945320 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 14 13:16:35.945327 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 14 13:16:35.945333 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 14 13:16:35.945338 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 14 13:16:35.945346 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 14 13:16:35.945351 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 14 13:16:35.945357 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 14 13:16:35.945363 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 14 13:16:35.945368 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 14 13:16:35.945373 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 14 13:16:35.945379 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 14 13:16:35.945384 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 14 13:16:35.945390 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 14 13:16:35.945396 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 14 13:16:35.945402 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 14 13:16:35.945407 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 14 13:16:35.945413 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 14 13:16:35.945419 kernel: iommu: Default domain type: Translated
Apr 14 13:16:35.945424 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 14 13:16:35.945430 kernel: PCI: Using ACPI for IRQ routing
Apr 14 13:16:35.945435 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 14 13:16:35.945441 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 14 13:16:35.945448 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 14 13:16:35.945502 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 14 13:16:35.945582 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 14 13:16:35.945640 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 14 13:16:35.945647 kernel: vgaarb: loaded
Apr 14 13:16:35.945653 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 14 13:16:35.945659 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 14 13:16:35.945664 kernel: clocksource: Switched to clocksource kvm-clock
Apr 14 13:16:35.945670 kernel: VFS: Disk quotas dquot_6.6.0
Apr 14 13:16:35.945677 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 14 13:16:35.945683 kernel: pnp: PnP ACPI init
Apr 14 13:16:35.945781 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 14 13:16:35.945790 kernel: pnp: PnP ACPI: found 6 devices
Apr 14 13:16:35.945795 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 14 13:16:35.945801 kernel: NET: Registered PF_INET protocol family
Apr 14 13:16:35.945807 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 14 13:16:35.945813 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 14 13:16:35.945820 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 14 13:16:35.945826 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 14 13:16:35.945832 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 14 13:16:35.945837 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 14 13:16:35.945843 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 13:16:35.945849 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 13:16:35.945854 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 14 13:16:35.945860 kernel: NET: Registered PF_XDP protocol family
Apr 14 13:16:35.945916 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 14 13:16:35.945970 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 14 13:16:35.946019 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 14 13:16:35.946068 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 14 13:16:35.946117 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 14 13:16:35.946166 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 14 13:16:35.946174 kernel: PCI: CLS 0 bytes, default 64
Apr 14 13:16:35.946180 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 14 13:16:35.946185 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 13:16:35.946193 kernel: Initialise system trusted keyrings
Apr 14 13:16:35.946199 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 14 13:16:35.946204 kernel: Key type asymmetric registered
Apr 14 13:16:35.946210 kernel: Asymmetric key parser 'x509' registered
Apr 14 13:16:35.946215 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 14 13:16:35.946221 kernel: io scheduler mq-deadline registered
Apr 14 13:16:35.946227 kernel: io scheduler kyber registered
Apr 14 13:16:35.946232 kernel: io scheduler bfq registered
Apr 14 13:16:35.946238 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 14 13:16:35.946246 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 14 13:16:35.946252 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 14 13:16:35.946258 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 14 13:16:35.946263 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 14 13:16:35.946269 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 14 13:16:35.946274 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 14 13:16:35.946280 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 14 13:16:35.946286 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 14 13:16:35.946381 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 14 13:16:35.946393 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 14 13:16:35.946446 kernel: rtc_cmos 00:04: registered as rtc0
Apr 14 13:16:35.946498 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T13:16:35 UTC (1776172595)
Apr 14 13:16:35.946549 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 14 13:16:35.946581 kernel: intel_pstate: CPU model not supported
Apr 14 13:16:35.946587 kernel: NET: Registered PF_INET6 protocol family
Apr 14 13:16:35.946593 kernel: Segment Routing with IPv6
Apr 14 13:16:35.946598 kernel: In-situ OAM (IOAM) with IPv6
Apr 14 13:16:35.946606 kernel: NET: Registered PF_PACKET protocol family
Apr 14 13:16:35.946612 kernel: Key type dns_resolver registered
Apr 14 13:16:35.946617 kernel: IPI shorthand broadcast: enabled
Apr 14 13:16:35.946623 kernel: sched_clock: Marking stable (1213017898, 212509616)->(1505132181, -79604667)
Apr 14 13:16:35.946629 kernel: registered taskstats version 1
Apr 14 13:16:35.946634 kernel: Loading compiled-in X.509 certificates
Apr 14 13:16:35.946640 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 14 13:16:35.946646 kernel: Key type .fscrypt registered
Apr 14 13:16:35.946651 kernel: Key type fscrypt-provisioning registered
Apr 14 13:16:35.946659 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 14 13:16:35.946664 kernel: ima: Allocated hash algorithm: sha1
Apr 14 13:16:35.946670 kernel: ima: No architecture policies found
Apr 14 13:16:35.946675 kernel: clk: Disabling unused clocks
Apr 14 13:16:35.946681 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 14 13:16:35.946686 kernel: Write protecting the kernel read-only data: 36864k
Apr 14 13:16:35.946692 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 14 13:16:35.946698 kernel: Run /init as init process
Apr 14 13:16:35.946704 kernel: with arguments:
Apr 14 13:16:35.946711 kernel: /init
Apr 14 13:16:35.946716 kernel: with environment:
Apr 14 13:16:35.946721 kernel: HOME=/
Apr 14 13:16:35.946727 kernel: TERM=linux
Apr 14 13:16:35.946734 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 13:16:35.946777 systemd[1]: Detected virtualization kvm.
Apr 14 13:16:35.946784 systemd[1]: Detected architecture x86-64.
Apr 14 13:16:35.946790 systemd[1]: Running in initrd.
Apr 14 13:16:35.946797 systemd[1]: No hostname configured, using default hostname.
Apr 14 13:16:35.946803 systemd[1]: Hostname set to <localhost>.
Apr 14 13:16:35.946809 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 13:16:35.946815 systemd[1]: Queued start job for default target initrd.target.
Apr 14 13:16:35.946821 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:16:35.946827 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:16:35.946833 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 14 13:16:35.946839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 14 13:16:35.946847 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 14 13:16:35.946861 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 14 13:16:35.946869 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 14 13:16:35.946875 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 14 13:16:35.946881 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 14 13:16:35.946889 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 14 13:16:35.946895 systemd[1]: Reached target paths.target - Path Units. Apr 14 13:16:35.946901 systemd[1]: Reached target slices.target - Slice Units. Apr 14 13:16:35.946907 systemd[1]: Reached target swap.target - Swaps. Apr 14 13:16:35.946912 systemd[1]: Reached target timers.target - Timer Units. Apr 14 13:16:35.946918 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 14 13:16:35.946924 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 14 13:16:35.946930 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 14 13:16:35.946938 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 14 13:16:35.946944 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 14 13:16:35.946950 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 14 13:16:35.946956 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 14 13:16:35.946962 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 13:16:35.946968 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 14 13:16:35.946974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 14 13:16:35.946980 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 14 13:16:35.946986 systemd[1]: Starting systemd-fsck-usr.service... Apr 14 13:16:35.946994 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 14 13:16:35.947000 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 14 13:16:35.947006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 13:16:35.947027 systemd-journald[194]: Collecting audit messages is disabled. Apr 14 13:16:35.947045 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 14 13:16:35.947051 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 14 13:16:35.947059 systemd[1]: Finished systemd-fsck-usr.service. Apr 14 13:16:35.947068 systemd-journald[194]: Journal started Apr 14 13:16:35.947083 systemd-journald[194]: Runtime Journal (/run/log/journal/103656e0e3a0440cab729d703c839c2e) is 6.0M, max 48.4M, 42.3M free. Apr 14 13:16:35.951465 systemd[1]: Started systemd-journald.service - Journal Service. Apr 14 13:16:35.954108 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 14 13:16:35.958320 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Apr 14 13:16:35.963403 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 14 13:16:35.965295 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 14 13:16:35.984186 systemd-modules-load[195]: Inserted module 'overlay' Apr 14 13:16:35.989847 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 14 13:16:35.993043 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 13:16:36.025860 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 14 13:16:36.028386 systemd-modules-load[195]: Inserted module 'br_netfilter' Apr 14 13:16:36.161347 kernel: Bridge firewalling registered Apr 14 13:16:36.029654 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 14 13:16:36.168902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 13:16:36.185139 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 13:16:36.186231 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 13:16:36.200693 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 13:16:36.207103 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 14 13:16:36.222024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 14 13:16:36.224424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 14 13:16:36.236469 dracut-cmdline[230]: dracut-dracut-053 Apr 14 13:16:36.241677 dracut-cmdline[230]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 14 13:16:36.251325 systemd-resolved[231]: Positive Trust Anchors: Apr 14 13:16:36.251332 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 13:16:36.251356 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 13:16:36.253710 systemd-resolved[231]: Defaulting to hostname 'linux'. Apr 14 13:16:36.254468 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 13:16:36.255248 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 13:16:36.358889 kernel: SCSI subsystem initialized Apr 14 13:16:36.371881 kernel: Loading iSCSI transport class v2.0-870. 
Apr 14 13:16:36.383985 kernel: iscsi: registered transport (tcp) Apr 14 13:16:36.403478 kernel: iscsi: registered transport (qla4xxx) Apr 14 13:16:36.403548 kernel: QLogic iSCSI HBA Driver Apr 14 13:16:36.443551 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 14 13:16:36.453413 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 14 13:16:36.481441 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 14 13:16:36.481508 kernel: device-mapper: uevent: version 1.0.3 Apr 14 13:16:36.481939 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 14 13:16:36.528906 kernel: raid6: avx512x4 gen() 27695 MB/s Apr 14 13:16:36.546888 kernel: raid6: avx512x2 gen() 43066 MB/s Apr 14 13:16:36.564876 kernel: raid6: avx512x1 gen() 41809 MB/s Apr 14 13:16:36.582888 kernel: raid6: avx2x4 gen() 36395 MB/s Apr 14 13:16:36.600866 kernel: raid6: avx2x2 gen() 33843 MB/s Apr 14 13:16:36.619479 kernel: raid6: avx2x1 gen() 27585 MB/s Apr 14 13:16:36.619555 kernel: raid6: using algorithm avx512x2 gen() 43066 MB/s Apr 14 13:16:36.638676 kernel: raid6: .... xor() 29843 MB/s, rmw enabled Apr 14 13:16:36.638857 kernel: raid6: using avx512x2 recovery algorithm Apr 14 13:16:36.659857 kernel: xor: automatically using best checksumming function avx Apr 14 13:16:36.810864 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 14 13:16:36.822332 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 14 13:16:36.837123 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 14 13:16:36.847918 systemd-udevd[415]: Using default interface naming scheme 'v255'. Apr 14 13:16:36.853971 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 14 13:16:36.877234 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 14 13:16:36.889678 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Apr 14 13:16:36.933633 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 14 13:16:36.946088 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 14 13:16:36.987659 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 14 13:16:36.997954 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 14 13:16:37.006671 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 14 13:16:37.014738 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 14 13:16:37.022254 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 14 13:16:37.031044 kernel: cryptd: max_cpu_qlen set to 1000 Apr 14 13:16:37.031064 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 14 13:16:37.025219 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 14 13:16:37.042242 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 14 13:16:37.042430 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 14 13:16:37.042439 kernel: AVX2 version of gcm_enc/dec engaged. Apr 14 13:16:37.044691 kernel: GPT:9289727 != 19775487 Apr 14 13:16:37.044723 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 14 13:16:37.048373 kernel: GPT:9289727 != 19775487 Apr 14 13:16:37.048418 kernel: GPT: Use GNU Parted to correct GPT errors. 
Apr 14 13:16:37.049991 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 13:16:37.050024 kernel: AES CTR mode by8 optimization enabled Apr 14 13:16:37.057942 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 14 13:16:37.070219 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 14 13:16:37.070334 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 13:16:37.078526 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 14 13:16:37.106237 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 14 13:16:37.121831 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (472) Apr 14 13:16:37.121870 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (477) Apr 14 13:16:37.106426 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 13:16:37.119337 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 13:16:37.136813 kernel: libata version 3.00 loaded. Apr 14 13:16:37.138996 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 14 13:16:37.145196 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 14 13:16:37.157886 kernel: ahci 0000:00:1f.2: version 3.0 Apr 14 13:16:37.158173 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 14 13:16:37.166999 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 14 13:16:37.167308 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 14 13:16:37.177192 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 14 13:16:37.382482 kernel: scsi host0: ahci Apr 14 13:16:37.382904 kernel: scsi host1: ahci Apr 14 13:16:37.382998 kernel: scsi host2: ahci Apr 14 13:16:37.383083 kernel: scsi host3: ahci Apr 14 13:16:37.383174 kernel: scsi host4: ahci Apr 14 13:16:37.383256 kernel: scsi host5: ahci Apr 14 13:16:37.383334 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Apr 14 13:16:37.383344 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Apr 14 13:16:37.383353 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Apr 14 13:16:37.383361 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Apr 14 13:16:37.383370 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Apr 14 13:16:37.383380 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Apr 14 13:16:37.389232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 14 13:16:37.399323 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 14 13:16:37.402529 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 14 13:16:37.410507 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 14 13:16:37.413853 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 14 13:16:37.433798 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 14 13:16:37.437205 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 14 13:16:37.448483 disk-uuid[567]: Primary Header is updated. Apr 14 13:16:37.448483 disk-uuid[567]: Secondary Entries is updated. Apr 14 13:16:37.448483 disk-uuid[567]: Secondary Header is updated. Apr 14 13:16:37.456798 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 13:16:37.462847 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 13:16:37.471447 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 14 13:16:37.522260 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 14 13:16:37.522322 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 14 13:16:37.524921 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 14 13:16:37.529855 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 14 13:16:37.529936 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 14 13:16:37.534814 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 14 13:16:37.534859 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 14 13:16:37.536895 kernel: ata3.00: applying bridge limits Apr 14 13:16:37.538509 kernel: ata3.00: configured for UDMA/100 Apr 14 13:16:37.543848 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 14 13:16:37.589972 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 14 13:16:37.590227 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 14 13:16:37.605889 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 14 13:16:38.456854 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 14 13:16:38.458125 disk-uuid[568]: The operation has completed successfully. Apr 14 13:16:38.486400 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 14 13:16:38.486543 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 14 13:16:38.525211 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 14 13:16:38.534123 sh[592]: Success Apr 14 13:16:38.550794 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Apr 14 13:16:38.603906 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 14 13:16:38.629249 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 14 13:16:38.635219 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 14 13:16:38.648186 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 14 13:16:38.648280 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 14 13:16:38.648304 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 14 13:16:38.653357 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 14 13:16:38.653463 kernel: BTRFS info (device dm-0): using free space tree Apr 14 13:16:38.667403 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 14 13:16:38.672899 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 14 13:16:38.686127 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 14 13:16:38.692139 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Apr 14 13:16:38.705046 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 13:16:38.705092 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 13:16:38.705107 kernel: BTRFS info (device vda6): using free space tree Apr 14 13:16:38.711844 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 13:16:38.720116 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 14 13:16:38.725357 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 13:16:38.732208 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 14 13:16:38.749111 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 14 13:16:39.418279 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 14 13:16:39.432426 ignition[680]: Ignition 2.19.0 Apr 14 13:16:39.432455 ignition[680]: Stage: fetch-offline Apr 14 13:16:39.437025 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 14 13:16:39.432501 ignition[680]: no configs at "/usr/lib/ignition/base.d" Apr 14 13:16:39.432508 ignition[680]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 13:16:39.432921 ignition[680]: parsed url from cmdline: "" Apr 14 13:16:39.432924 ignition[680]: no config URL provided Apr 14 13:16:39.432927 ignition[680]: reading system config file "/usr/lib/ignition/user.ign" Apr 14 13:16:39.433119 ignition[680]: no config at "/usr/lib/ignition/user.ign" Apr 14 13:16:39.433174 ignition[680]: op(1): [started] loading QEMU firmware config module Apr 14 13:16:39.433179 ignition[680]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 14 13:16:39.577901 ignition[680]: op(1): [finished] loading QEMU firmware config module Apr 14 13:16:39.606693 systemd-networkd[779]: lo: Link UP Apr 14 13:16:39.606737 systemd-networkd[779]: lo: Gained carrier Apr 14 13:16:39.608288 systemd-networkd[779]: Enumeration completed Apr 14 13:16:39.609009 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 14 13:16:39.609605 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 13:16:39.609607 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 14 13:16:39.610554 systemd-networkd[779]: eth0: Link UP Apr 14 13:16:39.610557 systemd-networkd[779]: eth0: Gained carrier Apr 14 13:16:39.610562 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 14 13:16:39.613918 systemd[1]: Reached target network.target - Network. Apr 14 13:16:39.644092 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 14 13:16:39.865191 ignition[680]: parsing config with SHA512: 410bdcb564587af34591d997076343fa75836495c84265dd56882d630bd04c8a3ef6460fd59a3b6cb6447a5177627b03a62e50b3cbc9d636cbc2b9ce12fc9767 Apr 14 13:16:39.890084 unknown[680]: fetched base config from "system" Apr 14 13:16:39.890099 unknown[680]: fetched user config from "qemu" Apr 14 13:16:39.893341 systemd-resolved[231]: Detected conflict on linux IN A 10.0.0.13 Apr 14 13:16:39.893350 systemd-resolved[231]: Hostname conflict, changing published hostname from 'linux' to 'linux7'. 
Apr 14 13:16:39.899367 ignition[680]: fetch-offline: fetch-offline passed Apr 14 13:16:39.899601 ignition[680]: Ignition finished successfully Apr 14 13:16:39.907560 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 14 13:16:39.907998 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 14 13:16:39.927115 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 14 13:16:40.197450 ignition[784]: Ignition 2.19.0 Apr 14 13:16:40.197471 ignition[784]: Stage: kargs Apr 14 13:16:40.197687 ignition[784]: no configs at "/usr/lib/ignition/base.d" Apr 14 13:16:40.197694 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 13:16:40.209541 ignition[784]: kargs: kargs passed Apr 14 13:16:40.209639 ignition[784]: Ignition finished successfully Apr 14 13:16:40.222221 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 14 13:16:40.236480 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 14 13:16:40.344389 ignition[792]: Ignition 2.19.0 Apr 14 13:16:40.344414 ignition[792]: Stage: disks Apr 14 13:16:40.344644 ignition[792]: no configs at "/usr/lib/ignition/base.d" Apr 14 13:16:40.344652 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 13:16:40.345559 ignition[792]: disks: disks passed Apr 14 13:16:40.346363 ignition[792]: Ignition finished successfully Apr 14 13:16:40.361251 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 14 13:16:40.364205 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 14 13:16:40.366898 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 14 13:16:40.366973 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 14 13:16:40.366998 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 13:16:40.375062 systemd[1]: Reached target basic.target - Basic System. Apr 14 13:16:40.396070 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 14 13:16:40.420867 systemd-fsck[802]: ROOT: clean, 14/553520 files, 52654/553472 blocks Apr 14 13:16:40.426272 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 14 13:16:40.441206 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 14 13:16:40.561977 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 14 13:16:40.565001 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 14 13:16:40.565990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 14 13:16:40.589454 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 14 13:16:40.597708 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 14 13:16:40.605624 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (810) Apr 14 13:16:40.605655 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 13:16:40.600889 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Apr 14 13:16:40.623286 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 13:16:40.623318 kernel: BTRFS info (device vda6): using free space tree Apr 14 13:16:40.623331 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 13:16:40.600947 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 14 13:16:40.600979 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 14 13:16:40.631394 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 14 13:16:40.654679 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 14 13:16:40.728037 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 14 13:16:40.813405 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Apr 14 13:16:40.820240 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory Apr 14 13:16:40.827336 initrd-setup-root[848]: cut: /sysroot/etc/shadow: No such file or directory Apr 14 13:16:40.843646 initrd-setup-root[855]: cut: /sysroot/etc/gshadow: No such file or directory Apr 14 13:16:41.033043 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 14 13:16:41.054074 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 14 13:16:41.065015 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 14 13:16:41.072492 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 14 13:16:41.077495 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 13:16:41.110452 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 14 13:16:41.149539 ignition[924]: INFO : Ignition 2.19.0 Apr 14 13:16:41.149539 ignition[924]: INFO : Stage: mount Apr 14 13:16:41.154608 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 14 13:16:41.154608 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 14 13:16:41.154608 ignition[924]: INFO : mount: mount passed Apr 14 13:16:41.154608 ignition[924]: INFO : Ignition finished successfully Apr 14 13:16:41.169265 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 14 13:16:41.184165 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 14 13:16:41.582468 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 14 13:16:41.603948 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (937) Apr 14 13:16:41.604088 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 14 13:16:41.604144 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 14 13:16:41.609139 kernel: BTRFS info (device vda6): using free space tree Apr 14 13:16:41.615866 kernel: BTRFS info (device vda6): auto enabling async discard Apr 14 13:16:41.617105 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 14 13:16:41.674116 ignition[954]: INFO : Ignition 2.19.0
Apr 14 13:16:41.674116 ignition[954]: INFO : Stage: files
Apr 14 13:16:41.678892 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:16:41.678892 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:16:41.685920 ignition[954]: DEBUG : files: compiled without relabeling support, skipping
Apr 14 13:16:41.685970 systemd-networkd[779]: eth0: Gained IPv6LL
Apr 14 13:16:41.691828 ignition[954]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 14 13:16:41.691828 ignition[954]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 14 13:16:41.698278 ignition[954]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 14 13:16:41.701632 ignition[954]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 14 13:16:41.704787 ignition[954]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 14 13:16:41.704787 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 14 13:16:41.704787 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 14 13:16:41.704787 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 13:16:41.704787 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 14 13:16:41.702186 unknown[954]: wrote ssh authorized keys file for user: core
Apr 14 13:16:41.766278 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 14 13:16:41.863854 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 13:16:41.863854 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 14 13:16:41.863854 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Apr 14 13:16:42.151004 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Apr 14 13:16:42.825254 kernel: hrtimer: interrupt took 5284869 ns
Apr 14 13:16:42.834788 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Apr 14 13:16:42.834788 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 13:16:42.845294 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 14 13:16:43.123023 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Apr 14 13:16:45.587817 ignition[954]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 13:16:45.593995 ignition[954]: INFO : files: op(d): [started] processing unit "containerd.service"
Apr 14 13:16:45.599513 ignition[954]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(d): [finished] processing unit "containerd.service"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Apr 14 13:16:45.605596 ignition[954]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Apr 14 13:16:45.679711 ignition[954]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 13:16:45.690001 ignition[954]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 13:16:45.694239 ignition[954]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 14 13:16:45.694239 ignition[954]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Apr 14 13:16:45.694239 ignition[954]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Apr 14 13:16:45.694239 ignition[954]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 13:16:45.694239 ignition[954]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 13:16:45.694239 ignition[954]: INFO : files: files passed
Apr 14 13:16:45.694239 ignition[954]: INFO : Ignition finished successfully
Apr 14 13:16:45.706096 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 14 13:16:45.731456 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 14 13:16:45.735659 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 14 13:16:45.754488 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 14 13:16:45.754805 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 14 13:16:45.763342 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 14 13:16:45.812189 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:16:45.812189 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:16:45.818243 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 13:16:45.817599 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 13:16:45.832939 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 14 13:16:45.853434 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 14 13:16:45.889072 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 14 13:16:45.889198 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 14 13:16:45.894544 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 14 13:16:45.901462 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 14 13:16:45.904528 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 14 13:16:45.908672 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 14 13:16:45.942588 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 13:16:45.955381 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 14 13:16:45.976923 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 14 13:16:45.977146 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:16:45.988256 systemd[1]: Stopped target timers.target - Timer Units.
Apr 14 13:16:45.995655 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 14 13:16:45.996910 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 13:16:46.006409 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 14 13:16:46.012345 systemd[1]: Stopped target basic.target - Basic System.
Apr 14 13:16:46.016469 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 14 13:16:46.022005 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 13:16:46.027154 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 14 13:16:46.032377 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 14 13:16:46.034691 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 13:16:46.039721 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 14 13:16:46.047474 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 14 13:16:46.055925 systemd[1]: Stopped target swap.target - Swaps.
Apr 14 13:16:46.056130 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 14 13:16:46.056312 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 13:16:46.067892 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 14 13:16:46.073374 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:16:46.078510 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 14 13:16:46.080965 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:16:46.086435 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 14 13:16:46.086966 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 14 13:16:46.094597 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 14 13:16:46.094896 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 13:16:46.101710 systemd[1]: Stopped target paths.target - Path Units.
Apr 14 13:16:46.103836 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 14 13:16:46.106077 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:16:46.111845 systemd[1]: Stopped target slices.target - Slice Units.
Apr 14 13:16:46.116216 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 14 13:16:46.121444 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 14 13:16:46.121665 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 13:16:46.126969 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 14 13:16:46.127054 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 13:16:46.131734 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 14 13:16:46.131914 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 13:16:46.136737 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 14 13:16:46.136903 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 14 13:16:46.164807 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 14 13:16:46.246121 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 14 13:16:46.250096 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 14 13:16:46.252443 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:16:46.270522 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 14 13:16:46.270985 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 13:16:46.283643 ignition[1008]: INFO : Ignition 2.19.0
Apr 14 13:16:46.283643 ignition[1008]: INFO : Stage: umount
Apr 14 13:16:46.290111 ignition[1008]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 13:16:46.290111 ignition[1008]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 13:16:46.290111 ignition[1008]: INFO : umount: umount passed
Apr 14 13:16:46.290111 ignition[1008]: INFO : Ignition finished successfully
Apr 14 13:16:46.312956 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 14 13:16:46.314045 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 14 13:16:46.314168 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 14 13:16:46.328353 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 14 13:16:46.328936 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 14 13:16:46.337950 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 14 13:16:46.338456 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 14 13:16:46.356358 systemd[1]: Stopped target network.target - Network.
Apr 14 13:16:46.356518 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 14 13:16:46.356575 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 14 13:16:46.367977 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 14 13:16:46.368110 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 14 13:16:46.376030 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 14 13:16:46.376309 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 14 13:16:46.382439 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 14 13:16:46.382538 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 14 13:16:46.390230 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 14 13:16:46.390306 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 14 13:16:46.393581 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 14 13:16:46.404520 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 14 13:16:46.404920 systemd-networkd[779]: eth0: DHCPv6 lease lost
Apr 14 13:16:46.428107 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 14 13:16:46.428294 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 14 13:16:46.436187 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 14 13:16:46.436515 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 14 13:16:46.439448 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 14 13:16:46.439495 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:16:46.469296 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 14 13:16:46.472119 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 14 13:16:46.472201 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 13:16:46.478125 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 14 13:16:46.478175 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:16:46.483593 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 14 13:16:46.483688 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:16:46.489987 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 14 13:16:46.490031 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:16:46.496065 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:16:46.518966 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 14 13:16:46.519175 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 14 13:16:46.524803 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 14 13:16:46.524996 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 13:16:46.531227 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 14 13:16:46.531311 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:16:46.535380 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 14 13:16:46.535426 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 13:16:46.538206 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 14 13:16:46.538263 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 13:16:46.551606 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 14 13:16:46.551699 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 14 13:16:46.560866 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 13:16:46.560936 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 13:16:46.580206 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 14 13:16:46.585897 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 14 13:16:46.585970 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:16:46.591536 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 14 13:16:46.591580 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 13:16:46.598071 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 14 13:16:46.598180 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:16:46.603443 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 13:16:46.603497 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:16:46.610064 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 14 13:16:46.610158 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 14 13:16:46.617583 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 14 13:16:46.639288 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 14 13:16:46.664836 systemd[1]: Switching root.
Apr 14 13:16:46.691564 systemd-journald[194]: Journal stopped
Apr 14 13:16:48.191059 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 14 13:16:48.191130 kernel: SELinux: policy capability network_peer_controls=1
Apr 14 13:16:48.191148 kernel: SELinux: policy capability open_perms=1
Apr 14 13:16:48.191159 kernel: SELinux: policy capability extended_socket_class=1
Apr 14 13:16:48.191174 kernel: SELinux: policy capability always_check_network=0
Apr 14 13:16:48.191187 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 14 13:16:48.191199 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 14 13:16:48.191216 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 14 13:16:48.191228 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 14 13:16:48.191241 kernel: audit: type=1403 audit(1776172606.967:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 14 13:16:48.191255 systemd[1]: Successfully loaded SELinux policy in 53.464ms.
Apr 14 13:16:48.191280 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.884ms.
Apr 14 13:16:48.191296 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 13:16:48.191309 systemd[1]: Detected virtualization kvm.
Apr 14 13:16:48.191320 systemd[1]: Detected architecture x86-64.
Apr 14 13:16:48.192119 systemd[1]: Detected first boot.
Apr 14 13:16:48.192148 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 13:16:48.192162 zram_generator::config[1072]: No configuration found.
Apr 14 13:16:48.192179 systemd[1]: Populated /etc with preset unit settings.
Apr 14 13:16:48.192193 systemd[1]: Queued start job for default target multi-user.target.
Apr 14 13:16:48.192206 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 14 13:16:48.192230 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 14 13:16:48.192246 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 14 13:16:48.192261 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 14 13:16:48.192276 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 14 13:16:48.192291 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 14 13:16:48.192305 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 14 13:16:48.192320 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 14 13:16:48.192335 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 14 13:16:48.192352 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 13:16:48.192366 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 13:16:48.192382 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 14 13:16:48.192396 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 14 13:16:48.192410 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 14 13:16:48.192425 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 13:16:48.192443 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 14 13:16:48.192502 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 13:16:48.192517 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 14 13:16:48.192534 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 13:16:48.192549 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 13:16:48.192564 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 13:16:48.192580 systemd[1]: Reached target swap.target - Swaps.
Apr 14 13:16:48.192594 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 14 13:16:48.192613 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 14 13:16:48.193191 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 13:16:48.193247 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 13:16:48.193265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 13:16:48.193278 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 13:16:48.193293 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 13:16:48.193307 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 14 13:16:48.193321 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 14 13:16:48.193334 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 14 13:16:48.193347 systemd[1]: Mounting media.mount - External Media Directory...
Apr 14 13:16:48.193361 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:48.193375 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 14 13:16:48.193391 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 14 13:16:48.193405 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 14 13:16:48.193470 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 14 13:16:48.193485 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 13:16:48.193498 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 13:16:48.193512 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 14 13:16:48.193524 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 13:16:48.193538 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 13:16:48.193552 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 13:16:48.193570 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 14 13:16:48.193583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 13:16:48.193598 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 14 13:16:48.193613 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 14 13:16:48.193666 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 14 13:16:48.193681 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 13:16:48.193696 kernel: fuse: init (API version 7.39)
Apr 14 13:16:48.193711 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 13:16:48.193724 kernel: loop: module loaded
Apr 14 13:16:48.193793 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 14 13:16:48.193831 systemd-journald[1168]: Collecting audit messages is disabled.
Apr 14 13:16:48.193860 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 14 13:16:48.193873 systemd-journald[1168]: Journal started
Apr 14 13:16:48.193932 systemd-journald[1168]: Runtime Journal (/run/log/journal/103656e0e3a0440cab729d703c839c2e) is 6.0M, max 48.4M, 42.3M free.
Apr 14 13:16:48.219475 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 13:16:48.219730 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:48.225567 kernel: ACPI: bus type drm_connector registered
Apr 14 13:16:48.225677 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 13:16:48.230036 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 14 13:16:48.233056 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 14 13:16:48.236388 systemd[1]: Mounted media.mount - External Media Directory.
Apr 14 13:16:48.239068 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 14 13:16:48.242258 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 14 13:16:48.247108 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 14 13:16:48.252292 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 14 13:16:48.257424 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 13:16:48.263216 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 14 13:16:48.264270 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 14 13:16:48.335717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 13:16:48.336093 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 13:16:48.340530 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 13:16:48.342158 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 13:16:48.346398 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 13:16:48.346582 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 13:16:48.353078 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 14 13:16:48.353258 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 14 13:16:48.356480 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 13:16:48.356737 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 13:16:48.361306 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 13:16:48.368930 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 14 13:16:48.373711 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 14 13:16:48.404413 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 13:16:48.528102 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 14 13:16:48.563189 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 14 13:16:48.633591 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 14 13:16:48.633857 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 14 13:16:48.637709 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 14 13:16:48.645987 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 14 13:16:48.650734 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 13:16:48.659621 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 14 13:16:48.665160 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 13:16:48.677466 systemd-journald[1168]: Time spent on flushing to /var/log/journal/103656e0e3a0440cab729d703c839c2e is 39.541ms for 948 entries.
Apr 14 13:16:48.677466 systemd-journald[1168]: Time spent on flushing to /var/log/journal/103656e0e3a0440cab729d703c839c2e is 39.541ms for 948 entries.
Apr 14 13:16:48.677466 systemd-journald[1168]: System Journal (/var/log/journal/103656e0e3a0440cab729d703c839c2e) is 8.0M, max 195.6M, 187.6M free.
Apr 14 13:16:48.893319 systemd-journald[1168]: Received client request to flush runtime journal.
Apr 14 13:16:48.670010 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 13:16:48.677986 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 13:16:48.690486 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 14 13:16:48.700314 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 14 13:16:48.709151 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 14 13:16:48.714245 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 14 13:16:48.730048 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 14 13:16:48.893023 udevadm[1213]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 14 13:16:48.905976 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 14 13:16:48.929976 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 13:16:48.941560 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Apr 14 13:16:48.941588 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Apr 14 13:16:48.947737 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 13:16:48.972069 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 14 13:16:49.069151 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 14 13:16:49.080961 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 13:16:49.131350 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Apr 14 13:16:49.131801 systemd-tmpfiles[1232]: ACLs are not supported, ignoring.
Apr 14 13:16:49.189013 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 13:16:51.433985 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 14 13:16:51.448258 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 13:16:51.479910 systemd-udevd[1238]: Using default interface naming scheme 'v255'.
Apr 14 13:16:51.501913 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 13:16:51.516121 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 13:16:51.537248 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 14 13:16:51.818832 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1257)
Apr 14 13:16:51.829339 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 14 13:16:51.899008 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 14 13:16:52.176802 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 14 13:16:52.223549 kernel: ACPI: button: Power Button [PWRF]
Apr 14 13:16:52.346241 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 14 13:16:52.347876 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 14 13:16:52.348054 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 14 13:16:52.349675 systemd-networkd[1253]: lo: Link UP
Apr 14 13:16:52.349739 systemd-networkd[1253]: lo: Gained carrier
Apr 14 13:16:52.353221 systemd-networkd[1253]: Enumeration completed
Apr 14 13:16:52.353491 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 13:16:52.359587 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:16:52.360136 systemd-networkd[1253]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 13:16:52.363936 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 14 13:16:52.373005 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 14 13:16:52.378542 systemd-networkd[1253]: eth0: Link UP
Apr 14 13:16:52.378577 systemd-networkd[1253]: eth0: Gained carrier
Apr 14 13:16:52.378604 systemd-networkd[1253]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 13:16:52.505303 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 13:16:52.512792 kernel: mousedev: PS/2 mouse device common for all mice
Apr 14 13:16:52.532294 systemd-networkd[1253]: eth0: DHCPv4 address 10.0.0.13/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 13:16:52.544006 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 13:16:52.799111 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 14 13:16:52.829444 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 14 13:16:53.002228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 13:16:53.023611 lvm[1281]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 13:16:53.131057 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 14 13:16:53.144451 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 13:16:53.206263 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 14 13:16:53.212883 lvm[1286]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 13:16:53.273126 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 14 13:16:53.278345 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 13:16:53.283160 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 14 13:16:53.283298 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 13:16:53.287317 systemd[1]: Reached target machines.target - Containers.
Apr 14 13:16:53.295058 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 14 13:16:53.313896 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 14 13:16:53.320855 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 14 13:16:53.325176 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 13:16:53.332149 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 14 13:16:53.343148 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 14 13:16:53.426962 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 14 13:16:53.436998 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 14 13:16:53.477386 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 14 13:16:53.494392 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 14 13:16:53.498245 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 14 13:16:53.505076 kernel: loop0: detected capacity change from 0 to 140768
Apr 14 13:16:53.550946 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 14 13:16:53.585245 kernel: loop1: detected capacity change from 0 to 228704
Apr 14 13:16:53.759008 kernel: loop2: detected capacity change from 0 to 142488
Apr 14 13:16:53.958977 kernel: loop3: detected capacity change from 0 to 140768
Apr 14 13:16:54.016950 kernel: loop4: detected capacity change from 0 to 228704
Apr 14 13:16:54.046011 kernel: loop5: detected capacity change from 0 to 142488
Apr 14 13:16:54.116451 (sd-merge)[1306]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 14 13:16:54.117140 (sd-merge)[1306]: Merged extensions into '/usr'.
Apr 14 13:16:54.257810 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 14 13:16:54.257829 systemd[1]: Reloading...
Apr 14 13:16:54.369274 zram_generator::config[1332]: No configuration found.
Apr 14 13:16:54.545961 systemd-networkd[1253]: eth0: Gained IPv6LL
Apr 14 13:16:54.823902 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:16:55.035657 systemd[1]: Reloading finished in 777 ms.
Apr 14 13:16:55.053947 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 14 13:16:55.056138 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 14 13:16:55.063287 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 14 13:16:55.069960 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 14 13:16:55.130357 systemd[1]: Starting ensure-sysext.service...
Apr 14 13:16:55.141914 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 13:16:55.164529 systemd[1]: Reloading requested from client PID 1380 ('systemctl') (unit ensure-sysext.service)...
Apr 14 13:16:55.164567 systemd[1]: Reloading...
Apr 14 13:16:55.404731 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 14 13:16:55.405114 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 14 13:16:55.410170 systemd-tmpfiles[1382]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 14 13:16:55.411091 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Apr 14 13:16:55.411130 systemd-tmpfiles[1382]: ACLs are not supported, ignoring.
Apr 14 13:16:55.415549 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 13:16:55.415557 systemd-tmpfiles[1382]: Skipping /boot
Apr 14 13:16:55.433016 zram_generator::config[1412]: No configuration found.
Apr 14 13:16:55.438321 systemd-tmpfiles[1382]: Detected autofs mount point /boot during canonicalization of boot.
Apr 14 13:16:55.438330 systemd-tmpfiles[1382]: Skipping /boot
Apr 14 13:16:55.815011 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 13:16:55.898937 systemd[1]: Reloading finished in 731 ms.
Apr 14 13:16:55.915485 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 13:16:55.962868 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 13:16:56.046203 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 14 13:16:56.054182 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 14 13:16:56.071142 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 13:16:56.076321 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 14 13:16:56.088030 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:56.088477 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 13:16:56.094982 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 13:16:56.141462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 13:16:56.165159 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 13:16:56.170962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 13:16:56.171843 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:56.172604 augenrules[1480]: No rules
Apr 14 13:16:56.173603 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 14 13:16:56.182144 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 13:16:56.187161 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 14 13:16:56.190954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 13:16:56.191068 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 13:16:56.195356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 13:16:56.195587 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 13:16:56.201325 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 13:16:56.201496 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 13:16:56.219808 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:56.219991 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 13:16:56.229538 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 13:16:56.243440 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 13:16:56.268836 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 13:16:56.330303 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 13:16:56.330481 systemd-resolved[1464]: Positive Trust Anchors:
Apr 14 13:16:56.330487 systemd-resolved[1464]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 13:16:56.330514 systemd-resolved[1464]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 13:16:56.345259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 13:16:56.381893 systemd-resolved[1464]: Defaulting to hostname 'linux'.
Apr 14 13:16:56.410118 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 14 13:16:56.412876 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 14 13:16:56.413084 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 13:16:56.416597 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 13:16:56.420289 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 14 13:16:56.423907 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 13:16:56.424050 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 13:16:56.427563 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 13:16:56.427734 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 13:16:56.433396 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 13:16:56.433582 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 13:16:56.438461 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 13:16:56.438631 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 13:16:56.443241 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 14 13:16:56.454878 systemd[1]: Finished ensure-sysext.service.
Apr 14 13:16:56.475640 systemd[1]: Reached target network.target - Network.
Apr 14 13:16:56.480349 systemd[1]: Reached target network-online.target - Network is Online.
Apr 14 13:16:56.484316 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 13:16:56.487341 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 13:16:56.487475 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 13:16:56.505141 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 14 13:16:56.615476 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 14 13:16:56.619381 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 13:16:57.047391 systemd-timesyncd[1516]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 14 13:16:57.047419 systemd-timesyncd[1516]: Initial clock synchronization to Tue 2026-04-14 13:16:57.047327 UTC.
Apr 14 13:16:57.047969 systemd-resolved[1464]: Clock change detected. Flushing caches.
Apr 14 13:16:57.050250 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 14 13:16:57.054849 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 14 13:16:57.060860 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 14 13:16:57.065423 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 14 13:16:57.065931 systemd[1]: Reached target paths.target - Path Units.
Apr 14 13:16:57.070234 systemd[1]: Reached target time-set.target - System Time Set.
Apr 14 13:16:57.075280 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 14 13:16:57.078145 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 14 13:16:57.082195 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 13:16:57.089508 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 14 13:16:57.101681 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 14 13:16:57.106894 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 14 13:16:57.115026 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 14 13:16:57.122423 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 13:16:57.125523 systemd[1]: Reached target basic.target - Basic System.
Apr 14 13:16:57.130120 systemd[1]: System is tainted: cgroupsv1
Apr 14 13:16:57.130250 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 14 13:16:57.130267 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 14 13:16:57.133188 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 14 13:16:57.138199 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 14 13:16:57.143266 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 14 13:16:57.149636 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 14 13:16:57.171385 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 14 13:16:57.175867 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 14 13:16:57.179139 jq[1524]: false
Apr 14 13:16:57.179719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 13:16:57.189351 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 14 13:16:57.194302 extend-filesystems[1526]: Found loop3
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found loop4
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found loop5
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found sr0
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found vda
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found vda1
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found vda2
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found vda3
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found usr
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found vda4
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found vda6
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found vda7
Apr 14 13:16:57.198904 extend-filesystems[1526]: Found vda9
Apr 14 13:16:57.198904 extend-filesystems[1526]: Checking size of /dev/vda9
Apr 14 13:16:57.285857 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 14 13:16:57.260023 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 14 13:16:57.258850 dbus-daemon[1522]: [system] SELinux support is enabled
Apr 14 13:16:57.304097 extend-filesystems[1526]: Resized partition /dev/vda9
Apr 14 13:16:57.301642 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 14 13:16:57.308263 extend-filesystems[1538]: resize2fs 1.47.1 (20-May-2024)
Apr 14 13:16:57.317437 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 14 13:16:57.326693 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1547)
Apr 14 13:16:57.326824 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 14 13:16:57.330126 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 14 13:16:57.353691 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 14 13:16:57.353691 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 14 13:16:57.353691 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 14 13:16:57.368073 extend-filesystems[1526]: Resized filesystem in /dev/vda9
Apr 14 13:16:57.356974 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 14 13:16:57.361671 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 14 13:16:57.363116 systemd[1]: Starting update-engine.service - Update Engine...
Apr 14 13:16:57.374873 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 14 13:16:57.383057 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 14 13:16:57.394231 jq[1566]: true
Apr 14 13:16:57.403660 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 14 13:16:57.403867 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 14 13:16:57.404150 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 14 13:16:57.404301 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 14 13:16:57.410432 systemd[1]: motdgen.service: Deactivated successfully.
Apr 14 13:16:57.410725 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 14 13:16:57.414233 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 14 13:16:57.419907 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 14 13:16:57.420112 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 14 13:16:57.448636 (ntainerd)[1579]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 14 13:16:57.457030 update_engine[1564]: I20260414 13:16:57.453351 1564 main.cc:92] Flatcar Update Engine starting
Apr 14 13:16:57.464414 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 14 13:16:57.473637 jq[1578]: true
Apr 14 13:16:57.465398 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 14 13:16:57.473594 systemd-logind[1561]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 14 13:16:57.473606 systemd-logind[1561]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 14 13:16:57.476453 systemd-logind[1561]: New seat seat0.
Apr 14 13:16:57.482422 tar[1577]: linux-amd64/LICENSE
Apr 14 13:16:57.482859 update_engine[1564]: I20260414 13:16:57.478441 1564 update_check_scheduler.cc:74] Next update check in 7m0s
Apr 14 13:16:57.482464 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 14 13:16:57.498847 tar[1577]: linux-amd64/helm
Apr 14 13:16:57.522721 systemd[1]: Started update-engine.service - Update Engine.
Apr 14 13:16:57.530398 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 14 13:16:57.533899 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 14 13:16:57.539800 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 14 13:16:57.545046 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 14 13:16:57.545215 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 14 13:16:57.556982 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 14 13:16:57.574908 bash[1615]: Updated "/home/core/.ssh/authorized_keys" Apr 14 13:16:57.581813 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 14 13:16:57.658085 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 14 13:16:57.667152 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 14 13:16:57.942578 sshd_keygen[1565]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 14 13:16:57.945859 locksmithd[1616]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 14 13:16:58.178116 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 14 13:16:58.195812 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 14 13:16:58.311006 systemd[1]: issuegen.service: Deactivated successfully. Apr 14 13:16:58.311347 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 14 13:16:58.347361 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 14 13:16:58.479698 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 14 13:16:58.596248 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 13:16:58.609976 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 13:16:58.613857 systemd[1]: Reached target getty.target - Login Prompts. Apr 14 13:16:59.226753 containerd[1579]: time="2026-04-14T13:16:59.226409666Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 14 13:16:59.354948 containerd[1579]: time="2026-04-14T13:16:59.354416348Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:16:59.442799 containerd[1579]: time="2026-04-14T13:16:59.442605205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.443201176Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.443236005Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.443686772Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.443756399Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.443914663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.443951055Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.444296233Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.444310022Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.444320172Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.444327894Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.444406754Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:16:59.447732 containerd[1579]: time="2026-04-14T13:16:59.444734849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 14 13:16:59.448279 containerd[1579]: time="2026-04-14T13:16:59.445806884Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 13:16:59.448279 containerd[1579]: time="2026-04-14T13:16:59.445961856Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 14 13:16:59.448279 containerd[1579]: time="2026-04-14T13:16:59.446217858Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 14 13:16:59.448279 containerd[1579]: time="2026-04-14T13:16:59.447323626Z" level=info msg="metadata content store policy set" policy=shared Apr 14 13:16:59.454067 containerd[1579]: time="2026-04-14T13:16:59.454047525Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 14 13:16:59.454306 containerd[1579]: time="2026-04-14T13:16:59.454274409Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 14 13:16:59.454345 containerd[1579]: time="2026-04-14T13:16:59.454339174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 14 13:16:59.454400 containerd[1579]: time="2026-04-14T13:16:59.454392488Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 14 13:16:59.454470 containerd[1579]: time="2026-04-14T13:16:59.454461836Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 14 13:16:59.454751 containerd[1579]: time="2026-04-14T13:16:59.454710646Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 14 13:16:59.456059 containerd[1579]: time="2026-04-14T13:16:59.456043173Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 14 13:16:59.456393 containerd[1579]: time="2026-04-14T13:16:59.456380028Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 14 13:16:59.456435 containerd[1579]: time="2026-04-14T13:16:59.456429251Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 14 13:16:59.456464 containerd[1579]: time="2026-04-14T13:16:59.456458289Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 14 13:16:59.457735 containerd[1579]: time="2026-04-14T13:16:59.457684070Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 14 13:16:59.457974 containerd[1579]: time="2026-04-14T13:16:59.457962712Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 14 13:16:59.458032 containerd[1579]: time="2026-04-14T13:16:59.458024995Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 14 13:16:59.458092 containerd[1579]: time="2026-04-14T13:16:59.458085977Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 14 13:16:59.458333 containerd[1579]: time="2026-04-14T13:16:59.458323047Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 14 13:16:59.458386 containerd[1579]: time="2026-04-14T13:16:59.458362029Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 14 13:16:59.458414 containerd[1579]: time="2026-04-14T13:16:59.458409121Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 14 13:16:59.458454 containerd[1579]: time="2026-04-14T13:16:59.458446997Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 14 13:16:59.459584 containerd[1579]: time="2026-04-14T13:16:59.459242753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.459999 containerd[1579]: time="2026-04-14T13:16:59.459987650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460049 containerd[1579]: time="2026-04-14T13:16:59.460041813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460192 containerd[1579]: time="2026-04-14T13:16:59.460182443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460226 containerd[1579]: time="2026-04-14T13:16:59.460220802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460255 containerd[1579]: time="2026-04-14T13:16:59.460249664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460281 containerd[1579]: time="2026-04-14T13:16:59.460276195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Apr 14 13:16:59.460319 containerd[1579]: time="2026-04-14T13:16:59.460313410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460347 containerd[1579]: time="2026-04-14T13:16:59.460341631Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460386 containerd[1579]: time="2026-04-14T13:16:59.460380086Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460413 containerd[1579]: time="2026-04-14T13:16:59.460408111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460441 containerd[1579]: time="2026-04-14T13:16:59.460436012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460585 containerd[1579]: time="2026-04-14T13:16:59.460575811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460710 containerd[1579]: time="2026-04-14T13:16:59.460703044Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 14 13:16:59.460829 containerd[1579]: time="2026-04-14T13:16:59.460821836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460871 containerd[1579]: time="2026-04-14T13:16:59.460864302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.460897 containerd[1579]: time="2026-04-14T13:16:59.460892032Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 14 13:16:59.461078 containerd[1579]: time="2026-04-14T13:16:59.461070570Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 14 13:16:59.461759 containerd[1579]: time="2026-04-14T13:16:59.461744125Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 14 13:16:59.461805 containerd[1579]: time="2026-04-14T13:16:59.461799484Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 14 13:16:59.461834 containerd[1579]: time="2026-04-14T13:16:59.461827596Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 14 13:16:59.461859 containerd[1579]: time="2026-04-14T13:16:59.461853802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 14 13:16:59.461945 containerd[1579]: time="2026-04-14T13:16:59.461937131Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 14 13:16:59.462029 containerd[1579]: time="2026-04-14T13:16:59.462022514Z" level=info msg="NRI interface is disabled by configuration." Apr 14 13:16:59.462059 containerd[1579]: time="2026-04-14T13:16:59.462053152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 14 13:16:59.466434 containerd[1579]: time="2026-04-14T13:16:59.466001882Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 14 13:16:59.468335 containerd[1579]: time="2026-04-14T13:16:59.468229163Z" level=info msg="Connect containerd service" Apr 14 13:16:59.468856 containerd[1579]: time="2026-04-14T13:16:59.468619977Z" level=info msg="using legacy CRI server" Apr 14 13:16:59.468856 containerd[1579]: time="2026-04-14T13:16:59.468653941Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 14 13:16:59.469523 containerd[1579]: time="2026-04-14T13:16:59.469432441Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 14 13:16:59.478789 containerd[1579]: time="2026-04-14T13:16:59.476095661Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 14 
13:16:59.478789 containerd[1579]: time="2026-04-14T13:16:59.477931480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 14 13:16:59.478789 containerd[1579]: time="2026-04-14T13:16:59.478049968Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 14 13:16:59.478789 containerd[1579]: time="2026-04-14T13:16:59.478232158Z" level=info msg="Start subscribing containerd event" Apr 14 13:16:59.478789 containerd[1579]: time="2026-04-14T13:16:59.478369238Z" level=info msg="Start recovering state" Apr 14 13:16:59.483060 containerd[1579]: time="2026-04-14T13:16:59.482669131Z" level=info msg="Start event monitor" Apr 14 13:16:59.483772 containerd[1579]: time="2026-04-14T13:16:59.483395112Z" level=info msg="Start snapshots syncer" Apr 14 13:16:59.483772 containerd[1579]: time="2026-04-14T13:16:59.483652670Z" level=info msg="Start cni network conf syncer for default" Apr 14 13:16:59.483772 containerd[1579]: time="2026-04-14T13:16:59.483666975Z" level=info msg="Start streaming server" Apr 14 13:16:59.497209 systemd[1]: Started containerd.service - containerd container runtime. Apr 14 13:16:59.501366 containerd[1579]: time="2026-04-14T13:16:59.501148107Z" level=info msg="containerd successfully booted in 0.271112s" Apr 14 13:17:00.053811 tar[1577]: linux-amd64/README.md Apr 14 13:17:00.301704 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 14 13:17:02.004669 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:17:02.009887 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 14 13:17:02.012673 systemd[1]: Startup finished in 12.598s (kernel) + 14.668s (userspace) = 27.267s. Apr 14 13:17:02.026955 (kubelet)[1665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 13:17:03.442945 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 14 13:17:03.447779 systemd[1]: Started sshd@0-10.0.0.13:22-10.0.0.1:59838.service - OpenSSH per-connection server daemon (10.0.0.1:59838). Apr 14 13:17:03.516640 kubelet[1665]: E0414 13:17:03.516282 1665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 13:17:03.519033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 13:17:03.519217 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 13:17:03.520441 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 59838 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:03.523356 sshd[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:03.533393 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 14 13:17:03.546769 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 14 13:17:03.548767 systemd-logind[1561]: New session 1 of user core. Apr 14 13:17:03.557228 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 14 13:17:03.558834 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 14 13:17:03.566482 (systemd)[1683]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 14 13:17:03.668904 systemd[1683]: Queued start job for default target default.target. Apr 14 13:17:03.669281 systemd[1683]: Created slice app.slice - User Application Slice. Apr 14 13:17:03.669296 systemd[1683]: Reached target paths.target - Paths. Apr 14 13:17:03.669304 systemd[1683]: Reached target timers.target - Timers. Apr 14 13:17:03.678286 systemd[1683]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 14 13:17:03.694014 systemd[1683]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 14 13:17:03.694064 systemd[1683]: Reached target sockets.target - Sockets. Apr 14 13:17:03.694073 systemd[1683]: Reached target basic.target - Basic System. Apr 14 13:17:03.694100 systemd[1683]: Reached target default.target - Main User Target. Apr 14 13:17:03.694120 systemd[1683]: Startup finished in 122ms. Apr 14 13:17:03.694881 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 14 13:17:03.696272 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 14 13:17:03.763796 systemd[1]: Started sshd@1-10.0.0.13:22-10.0.0.1:59852.service - OpenSSH per-connection server daemon (10.0.0.1:59852). Apr 14 13:17:03.797955 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 59852 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:03.800307 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:03.808356 systemd-logind[1561]: New session 2 of user core. Apr 14 13:17:03.819880 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 14 13:17:03.902681 sshd[1696]: pam_unix(sshd:session): session closed for user core Apr 14 13:17:03.921308 systemd[1]: Started sshd@2-10.0.0.13:22-10.0.0.1:59866.service - OpenSSH per-connection server daemon (10.0.0.1:59866). Apr 14 13:17:03.925197 systemd[1]: sshd@1-10.0.0.13:22-10.0.0.1:59852.service: Deactivated successfully. Apr 14 13:17:03.929421 systemd[1]: session-2.scope: Deactivated successfully. Apr 14 13:17:03.935120 systemd-logind[1561]: Session 2 logged out. Waiting for processes to exit. Apr 14 13:17:03.942769 systemd-logind[1561]: Removed session 2. Apr 14 13:17:03.959318 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 59866 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:03.961407 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:03.974424 systemd-logind[1561]: New session 3 of user core. Apr 14 13:17:03.992367 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 14 13:17:04.049222 sshd[1701]: pam_unix(sshd:session): session closed for user core Apr 14 13:17:04.057840 systemd[1]: Started sshd@3-10.0.0.13:22-10.0.0.1:59874.service - OpenSSH per-connection server daemon (10.0.0.1:59874). Apr 14 13:17:04.059323 systemd[1]: sshd@2-10.0.0.13:22-10.0.0.1:59866.service: Deactivated successfully. Apr 14 13:17:04.062655 systemd[1]: session-3.scope: Deactivated successfully. Apr 14 13:17:04.063228 systemd-logind[1561]: Session 3 logged out. Waiting for processes to exit. Apr 14 13:17:04.064623 systemd-logind[1561]: Removed session 3. 
Apr 14 13:17:04.107341 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 59874 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:04.108869 sshd[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:04.118278 systemd-logind[1561]: New session 4 of user core. Apr 14 13:17:04.130636 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 14 13:17:04.198242 sshd[1709]: pam_unix(sshd:session): session closed for user core Apr 14 13:17:04.207769 systemd[1]: Started sshd@4-10.0.0.13:22-10.0.0.1:59886.service - OpenSSH per-connection server daemon (10.0.0.1:59886). Apr 14 13:17:04.208459 systemd[1]: sshd@3-10.0.0.13:22-10.0.0.1:59874.service: Deactivated successfully. Apr 14 13:17:04.210953 systemd[1]: session-4.scope: Deactivated successfully. Apr 14 13:17:04.213392 systemd-logind[1561]: Session 4 logged out. Waiting for processes to exit. Apr 14 13:17:04.215105 systemd-logind[1561]: Removed session 4. Apr 14 13:17:04.248031 sshd[1717]: Accepted publickey for core from 10.0.0.1 port 59886 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:04.251661 sshd[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:04.257260 systemd-logind[1561]: New session 5 of user core. Apr 14 13:17:04.266821 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 14 13:17:04.358060 sudo[1724]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 14 13:17:04.358447 sudo[1724]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:17:04.384651 sudo[1724]: pam_unix(sudo:session): session closed for user root Apr 14 13:17:04.389111 sshd[1717]: pam_unix(sshd:session): session closed for user core Apr 14 13:17:04.405426 systemd[1]: Started sshd@5-10.0.0.13:22-10.0.0.1:59898.service - OpenSSH per-connection server daemon (10.0.0.1:59898). Apr 14 13:17:04.406126 systemd[1]: sshd@4-10.0.0.13:22-10.0.0.1:59886.service: Deactivated successfully. Apr 14 13:17:04.409009 systemd[1]: session-5.scope: Deactivated successfully. Apr 14 13:17:04.411616 systemd-logind[1561]: Session 5 logged out. Waiting for processes to exit. Apr 14 13:17:04.413283 systemd-logind[1561]: Removed session 5. Apr 14 13:17:04.462931 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 59898 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:04.466754 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:04.478977 systemd-logind[1561]: New session 6 of user core. Apr 14 13:17:04.501254 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 14 13:17:04.566739 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 14 13:17:04.566956 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:17:04.573953 sudo[1734]: pam_unix(sudo:session): session closed for user root Apr 14 13:17:04.586285 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 14 13:17:04.586584 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:17:04.662724 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 14 13:17:04.682802 auditctl[1737]: No rules Apr 14 13:17:04.685293 systemd[1]: audit-rules.service: Deactivated successfully. 
Apr 14 13:17:04.685959 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 14 13:17:04.704785 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 13:17:04.741084 augenrules[1756]: No rules Apr 14 13:17:04.741965 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 13:17:04.744119 sudo[1733]: pam_unix(sudo:session): session closed for user root Apr 14 13:17:04.745994 sshd[1727]: pam_unix(sshd:session): session closed for user core Apr 14 13:17:04.751802 systemd[1]: Started sshd@6-10.0.0.13:22-10.0.0.1:59912.service - OpenSSH per-connection server daemon (10.0.0.1:59912). Apr 14 13:17:04.752348 systemd[1]: sshd@5-10.0.0.13:22-10.0.0.1:59898.service: Deactivated successfully. Apr 14 13:17:04.754590 systemd-logind[1561]: Session 6 logged out. Waiting for processes to exit. Apr 14 13:17:04.754674 systemd[1]: session-6.scope: Deactivated successfully. Apr 14 13:17:04.756115 systemd-logind[1561]: Removed session 6. Apr 14 13:17:04.791821 sshd[1762]: Accepted publickey for core from 10.0.0.1 port 59912 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:17:04.793609 sshd[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:17:04.800061 systemd-logind[1561]: New session 7 of user core. Apr 14 13:17:04.817023 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 14 13:17:04.883291 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 14 13:17:04.884336 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 14 13:17:05.223960 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 14 13:17:05.224776 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 14 13:17:05.513503 dockerd[1787]: time="2026-04-14T13:17:05.512929319Z" level=info msg="Starting up" Apr 14 13:17:05.791935 dockerd[1787]: time="2026-04-14T13:17:05.790766660Z" level=info msg="Loading containers: start." Apr 14 13:17:05.955626 kernel: Initializing XFRM netlink socket Apr 14 13:17:06.072289 systemd-networkd[1253]: docker0: Link UP Apr 14 13:17:06.095953 dockerd[1787]: time="2026-04-14T13:17:06.095783518Z" level=info msg="Loading containers: done." Apr 14 13:17:06.117755 dockerd[1787]: time="2026-04-14T13:17:06.117472851Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 14 13:17:06.117755 dockerd[1787]: time="2026-04-14T13:17:06.117684473Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 14 13:17:06.117755 dockerd[1787]: time="2026-04-14T13:17:06.117795298Z" level=info msg="Daemon has completed initialization" Apr 14 13:17:06.161827 dockerd[1787]: time="2026-04-14T13:17:06.161124312Z" level=info msg="API listen on /run/docker.sock" Apr 14 13:17:06.162607 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 14 13:17:06.779802 containerd[1579]: time="2026-04-14T13:17:06.779242425Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 14 13:17:07.439195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3110663089.mount: Deactivated successfully. 
Apr 14 13:17:08.987021 containerd[1579]: time="2026-04-14T13:17:08.986691878Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857" Apr 14 13:17:08.987021 containerd[1579]: time="2026-04-14T13:17:08.986902412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:08.995743 containerd[1579]: time="2026-04-14T13:17:08.995658524Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:09.000145 containerd[1579]: time="2026-04-14T13:17:09.000044273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:09.001026 containerd[1579]: time="2026-04-14T13:17:09.000910283Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 2.221521869s" Apr 14 13:17:09.001026 containerd[1579]: time="2026-04-14T13:17:09.001023530Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 14 13:17:09.046989 containerd[1579]: time="2026-04-14T13:17:09.046519927Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 14 13:17:12.624420 containerd[1579]: time="2026-04-14T13:17:12.624099993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:12.625394 containerd[1579]: time="2026-04-14T13:17:12.625045258Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841" Apr 14 13:17:12.629338 containerd[1579]: time="2026-04-14T13:17:12.627292593Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:12.644317 containerd[1579]: time="2026-04-14T13:17:12.643914909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:12.646372 containerd[1579]: time="2026-04-14T13:17:12.646257367Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 3.599580615s" Apr 14 13:17:12.646372 containerd[1579]: time="2026-04-14T13:17:12.646326938Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 14 
13:17:12.648279 containerd[1579]: time="2026-04-14T13:17:12.648242030Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 14 13:17:13.739702 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 14 13:17:13.875089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:17:15.614863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:17:15.630369 (kubelet)[2015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 13:17:16.171790 containerd[1579]: time="2026-04-14T13:17:16.171341972Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:16.177320 containerd[1579]: time="2026-04-14T13:17:16.174203803Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685" Apr 14 13:17:16.179043 containerd[1579]: time="2026-04-14T13:17:16.178986366Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:16.184035 containerd[1579]: time="2026-04-14T13:17:16.183864175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:16.185253 containerd[1579]: time="2026-04-14T13:17:16.185201522Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 3.536915954s" Apr 14 13:17:16.185374 containerd[1579]: time="2026-04-14T13:17:16.185254387Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 14 13:17:16.186370 containerd[1579]: time="2026-04-14T13:17:16.186336179Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 14 13:17:16.342668 kubelet[2015]: E0414 13:17:16.341751 2015 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 13:17:16.346069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 13:17:16.346256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 13:17:18.218179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3589856488.mount: Deactivated successfully. 
Apr 14 13:17:19.161694 containerd[1579]: time="2026-04-14T13:17:19.161213612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:19.214797 containerd[1579]: time="2026-04-14T13:17:19.213476495Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657" Apr 14 13:17:19.217928 containerd[1579]: time="2026-04-14T13:17:19.217840581Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:19.281860 containerd[1579]: time="2026-04-14T13:17:19.281371938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:19.287240 containerd[1579]: time="2026-04-14T13:17:19.286949218Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 3.100523802s" Apr 14 13:17:19.287240 containerd[1579]: time="2026-04-14T13:17:19.287150718Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 14 13:17:19.297315 containerd[1579]: time="2026-04-14T13:17:19.297221307Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 14 13:17:19.941341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1021447947.mount: Deactivated successfully. 
Apr 14 13:17:21.525823 containerd[1579]: time="2026-04-14T13:17:21.525222467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:21.528676 containerd[1579]: time="2026-04-14T13:17:21.526893657Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 14 13:17:21.530445 containerd[1579]: time="2026-04-14T13:17:21.530089350Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:21.535310 containerd[1579]: time="2026-04-14T13:17:21.534815370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:21.540101 containerd[1579]: time="2026-04-14T13:17:21.539895702Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.242609007s" Apr 14 13:17:21.540101 containerd[1579]: time="2026-04-14T13:17:21.540014497Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 14 13:17:21.541281 containerd[1579]: time="2026-04-14T13:17:21.541200230Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 14 13:17:22.155929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount958568013.mount: Deactivated successfully. 
Apr 14 13:17:22.166879 containerd[1579]: time="2026-04-14T13:17:22.166319963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:22.167998 containerd[1579]: time="2026-04-14T13:17:22.167826675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 14 13:17:22.169393 containerd[1579]: time="2026-04-14T13:17:22.169356128Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:22.185812 containerd[1579]: time="2026-04-14T13:17:22.185193721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:22.200774 containerd[1579]: time="2026-04-14T13:17:22.200146231Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 658.855598ms" Apr 14 13:17:22.200774 containerd[1579]: time="2026-04-14T13:17:22.200330279Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 14 13:17:22.202068 containerd[1579]: time="2026-04-14T13:17:22.201948308Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 14 13:17:23.016948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount455402418.mount: Deactivated successfully. Apr 14 13:17:26.544313 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 14 13:17:26.574664 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:17:27.530723 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 14 13:17:27.555793 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 14 13:17:27.753044 containerd[1579]: time="2026-04-14T13:17:27.750477440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:27.753044 containerd[1579]: time="2026-04-14T13:17:27.751782730Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278" Apr 14 13:17:27.754435 containerd[1579]: time="2026-04-14T13:17:27.754372107Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:27.773121 containerd[1579]: time="2026-04-14T13:17:27.770390104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:17:27.797597 containerd[1579]: time="2026-04-14T13:17:27.796910454Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 5.59487733s" Apr 14 13:17:27.797597 containerd[1579]: time="2026-04-14T13:17:27.797048799Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 14 13:17:27.896710 kubelet[2158]: E0414 13:17:27.896256 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 14 13:17:27.899428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 14 13:17:27.899889 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 14 13:17:30.816363 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:17:30.831210 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:17:30.861805 systemd[1]: Reloading requested from client PID 2206 ('systemctl') (unit session-7.scope)... Apr 14 13:17:30.861855 systemd[1]: Reloading... Apr 14 13:17:30.987040 zram_generator::config[2245]: No configuration found. Apr 14 13:17:31.447148 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 13:17:31.547125 systemd[1]: Reloading finished in 684 ms. Apr 14 13:17:31.772784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:17:31.774223 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:17:31.777440 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 13:17:31.777788 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:17:31.797683 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 14 13:17:32.098384 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:17:32.104910 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 13:17:32.797167 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 13:17:32.797167 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 13:17:32.797167 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 13:17:32.797167 kubelet[2308]: I0414 13:17:32.797174 2308 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 13:17:34.060074 kubelet[2308]: I0414 13:17:34.059693 2308 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 13:17:34.060074 kubelet[2308]: I0414 13:17:34.060048 2308 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 13:17:34.061485 kubelet[2308]: I0414 13:17:34.061405 2308 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 13:17:34.123279 kubelet[2308]: E0414 13:17:34.123073 2308 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 13:17:34.132673 kubelet[2308]: I0414 13:17:34.132460 2308 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 13:17:34.208796 kubelet[2308]: E0414 13:17:34.208305 2308 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 13:17:34.208796 kubelet[2308]: I0414 13:17:34.208515 2308 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 14 13:17:34.257239 kubelet[2308]: I0414 13:17:34.256933 2308 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 14 13:17:34.259705 kubelet[2308]: I0414 13:17:34.259460 2308 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 13:17:34.260422 kubelet[2308]: I0414 13:17:34.259692 2308 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 14 13:17:34.260422 kubelet[2308]: I0414 13:17:34.260418 2308 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 13:17:34.260740 kubelet[2308]: I0414 13:17:34.260428 2308 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 13:17:34.260945 kubelet[2308]: I0414 13:17:34.260888 2308 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:17:34.286113 kubelet[2308]: I0414 13:17:34.285377 2308 kubelet.go:480] "Attempting to sync node with API server" Apr 14 13:17:34.286661 kubelet[2308]: I0414 13:17:34.286441 2308 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 13:17:34.286943 kubelet[2308]: I0414 13:17:34.286845 2308 kubelet.go:386] "Adding apiserver pod source" Apr 14 13:17:34.370698 kubelet[2308]: I0414 13:17:34.369776 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 13:17:34.374858 kubelet[2308]: E0414 13:17:34.374801 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 13:17:34.385353 kubelet[2308]: E0414 13:17:34.385125 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:17:34.386224 
kubelet[2308]: I0414 13:17:34.386158 2308 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 13:17:34.387393 kubelet[2308]: I0414 13:17:34.387312 2308 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 13:17:34.396283 kubelet[2308]: W0414 13:17:34.395468 2308 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 14 13:17:34.412845 kubelet[2308]: I0414 13:17:34.412522 2308 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 13:17:34.413277 kubelet[2308]: I0414 13:17:34.413255 2308 server.go:1289] "Started kubelet" Apr 14 13:17:34.415768 kubelet[2308]: I0414 13:17:34.413731 2308 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 13:17:34.422672 kubelet[2308]: I0414 13:17:34.422498 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 13:17:34.424128 kubelet[2308]: I0414 13:17:34.423723 2308 server.go:317] "Adding debug handlers to kubelet server" Apr 14 13:17:34.425022 kubelet[2308]: I0414 13:17:34.423256 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 13:17:34.436440 kubelet[2308]: I0414 13:17:34.423402 2308 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 13:17:34.436440 kubelet[2308]: I0414 13:17:34.434965 2308 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 13:17:34.436440 kubelet[2308]: I0414 13:17:34.435203 2308 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 13:17:34.436440 kubelet[2308]: E0414 13:17:34.421625 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63b9e5e0a28a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:17:34.412834981 +0000 UTC m=+2.100645164,LastTimestamp:2026-04-14 13:17:34.412834981 +0000 UTC m=+2.100645164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:17:34.436440 kubelet[2308]: E0414 13:17:34.435844 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 13:17:34.436440 kubelet[2308]: I0414 13:17:34.436042 2308 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 13:17:34.436440 kubelet[2308]: I0414 13:17:34.436390 2308 reconciler.go:26] "Reconciler: start to sync state" Apr 14 13:17:34.436440 kubelet[2308]: E0414 13:17:34.436417 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="200ms" Apr 14 13:17:34.437000 kubelet[2308]: E0414 
13:17:34.436960 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:17:34.444734 kubelet[2308]: I0414 13:17:34.441184 2308 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 13:17:34.453212 kubelet[2308]: E0414 13:17:34.453043 2308 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 13:17:34.455147 kubelet[2308]: I0414 13:17:34.455098 2308 factory.go:223] Registration of the containerd container factory successfully Apr 14 13:17:34.455147 kubelet[2308]: I0414 13:17:34.455128 2308 factory.go:223] Registration of the systemd container factory successfully Apr 14 13:17:34.506173 kubelet[2308]: I0414 13:17:34.506098 2308 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 13:17:34.506173 kubelet[2308]: I0414 13:17:34.506150 2308 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 13:17:34.506316 kubelet[2308]: I0414 13:17:34.506203 2308 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:17:34.509957 kubelet[2308]: I0414 13:17:34.509902 2308 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 13:17:34.517274 kubelet[2308]: I0414 13:17:34.517066 2308 policy_none.go:49] "None policy: Start" Apr 14 13:17:34.518287 kubelet[2308]: I0414 13:17:34.517505 2308 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 13:17:34.519400 kubelet[2308]: I0414 13:17:34.518200 2308 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 13:17:34.519625 kubelet[2308]: I0414 13:17:34.519479 2308 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 13:17:34.519951 kubelet[2308]: I0414 13:17:34.519901 2308 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
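The entries above capture the first kubelet run (PID 2308) starting before the kube-apiserver it will bootstrap is listening: every list/watch against https://10.0.0.13:6443 fails with "connection refused", and the lease controller reschedules with a doubling interval (interval="200ms" here, then "400ms", "800ms", "1.6s", and "3.2s" further down). A minimal Go sketch of that retry shape, not the kubelet's actual nodelease controller; the endpoint and starting interval are taken from the log, while the 7s ceiling and the bare TCP dial standing in for the real lease client are assumptions of this sketch:

package main

import (
	"fmt"
	"net"
	"time"
)

// Sketch of the retry pattern visible in the log: each failed attempt
// doubles the wait, matching the observed 200ms -> 400ms -> 800ms ->
// 1.6s -> 3.2s progression of "will retry" intervals.
func main() {
	const apiServer = "10.0.0.13:6443" // endpoint from the log
	interval := 200 * time.Millisecond // first "will retry" interval logged
	const ceiling = 7 * time.Second    // cap is an assumption of this sketch

	for {
		conn, err := net.DialTimeout("tcp", apiServer, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server reachable; lease can be ensured")
			return
		}
		fmt.Printf("Failed to ensure lease exists, will retry: %v (next attempt in %v)\n", err, interval)
		time.Sleep(interval)
		if interval *= 2; interval > ceiling {
			interval = ceiling
		}
	}
}

In the log this loop resolves itself without intervention: once the static kube-apiserver pod is up, the same kubelet registers the node at 13:17:50 ("Successfully registered node" below).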
Apr 14 13:17:34.520044 kubelet[2308]: I0414 13:17:34.520012 2308 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 13:17:34.520231 kubelet[2308]: I0414 13:17:34.520108 2308 state_mem.go:35] "Initializing new in-memory state store" Apr 14 13:17:34.520231 kubelet[2308]: E0414 13:17:34.520104 2308 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 13:17:34.521056 kubelet[2308]: E0414 13:17:34.520995 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:17:34.533232 kubelet[2308]: E0414 13:17:34.533146 2308 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 13:17:34.534727 kubelet[2308]: I0414 13:17:34.534049 2308 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 13:17:34.534727 kubelet[2308]: I0414 13:17:34.534337 2308 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 13:17:34.535288 kubelet[2308]: I0414 13:17:34.535221 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 13:17:34.549824 kubelet[2308]: E0414 13:17:34.549701 2308 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 13:17:34.550064 kubelet[2308]: E0414 13:17:34.549888 2308 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:17:34.642207 kubelet[2308]: E0414 13:17:34.640142 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="400ms" Apr 14 13:17:34.642207 kubelet[2308]: I0414 13:17:34.641402 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:17:34.642207 kubelet[2308]: E0414 13:17:34.642122 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 14 13:17:34.642319 kubelet[2308]: I0414 13:17:34.642295 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 13:17:34.673006 kubelet[2308]: E0414 13:17:34.672876 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:34.674745 kubelet[2308]: E0414 13:17:34.673479 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:34.677942 kubelet[2308]: E0414 13:17:34.677817 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:35.046826 kubelet[2308]: E0414 13:17:35.046618 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="800ms" Apr 14 13:17:35.047102 kubelet[2308]: I0414 13:17:35.047071 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:17:35.047669 kubelet[2308]: E0414 13:17:35.047596 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 14 13:17:35.148468 kubelet[2308]: I0414 13:17:35.148178 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebc9914d822867a0887622bbfbeed705-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebc9914d822867a0887622bbfbeed705\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:17:35.148468 kubelet[2308]: I0414 13:17:35.148398 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:17:35.148468 kubelet[2308]: I0414 13:17:35.148597 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:17:35.149312 kubelet[2308]: I0414 13:17:35.148671 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebc9914d822867a0887622bbfbeed705-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebc9914d822867a0887622bbfbeed705\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:17:35.149312 kubelet[2308]: I0414 13:17:35.148686 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebc9914d822867a0887622bbfbeed705-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ebc9914d822867a0887622bbfbeed705\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:17:35.149312 kubelet[2308]: I0414 13:17:35.148751 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:17:35.149312 kubelet[2308]: I0414 13:17:35.148854 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 14 13:17:35.149312 kubelet[2308]: I0414 13:17:35.148885 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:17:35.276275 kubelet[2308]: E0414 13:17:35.275771 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:35.311617 kubelet[2308]: E0414 13:17:35.309163 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:35.351140 containerd[1579]: time="2026-04-14T13:17:35.350897702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}" Apr 14 13:17:35.352007 containerd[1579]: time="2026-04-14T13:17:35.350919993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}" Apr 14 13:17:35.416843 kubelet[2308]: E0414 13:17:35.416633 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:17:35.443186 kubelet[2308]: E0414 13:17:35.442942 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:17:35.466925 kubelet[2308]: I0414 13:17:35.466339 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:17:35.479857 kubelet[2308]: E0414 13:17:35.479043 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 14 13:17:35.504086 kubelet[2308]: E0414 13:17:35.503909 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:17:35.578763 kubelet[2308]: E0414 13:17:35.578222 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:35.584497 containerd[1579]: time="2026-04-14T13:17:35.582750241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ebc9914d822867a0887622bbfbeed705,Namespace:kube-system,Attempt:0,}" Apr 14 13:17:35.597944 kubelet[2308]: E0414 13:17:35.597803 2308 reflector.go:200] 
"Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 13:17:35.853797 kubelet[2308]: E0414 13:17:35.849693 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="1.6s" Apr 14 13:17:36.080232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2512861485.mount: Deactivated successfully. Apr 14 13:17:36.111750 containerd[1579]: time="2026-04-14T13:17:36.110479444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:17:36.113081 containerd[1579]: time="2026-04-14T13:17:36.112459921Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:17:36.114267 containerd[1579]: time="2026-04-14T13:17:36.114189091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 14 13:17:36.119745 containerd[1579]: time="2026-04-14T13:17:36.118211124Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:17:36.120091 containerd[1579]: time="2026-04-14T13:17:36.119776027Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 13:17:36.122520 containerd[1579]: time="2026-04-14T13:17:36.122083044Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:17:36.126021 containerd[1579]: time="2026-04-14T13:17:36.124938752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 13:17:36.131934 containerd[1579]: time="2026-04-14T13:17:36.131377560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 13:17:36.135869 containerd[1579]: time="2026-04-14T13:17:36.135768621Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 784.62153ms" Apr 14 13:17:36.136363 containerd[1579]: time="2026-04-14T13:17:36.136095459Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 
784.46212ms" Apr 14 13:17:36.136936 containerd[1579]: time="2026-04-14T13:17:36.136852749Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 551.448755ms" Apr 14 13:17:36.432483 kubelet[2308]: E0414 13:17:36.432203 2308 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 13:17:36.433071 kubelet[2308]: I0414 13:17:36.432993 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:17:36.467038 kubelet[2308]: E0414 13:17:36.466916 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 14 13:17:37.363320 containerd[1579]: time="2026-04-14T13:17:37.361237751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:17:37.363320 containerd[1579]: time="2026-04-14T13:17:37.362244278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:17:37.363320 containerd[1579]: time="2026-04-14T13:17:37.362255566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:17:37.369799 containerd[1579]: time="2026-04-14T13:17:37.365208163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:17:37.397109 containerd[1579]: time="2026-04-14T13:17:37.388093393Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:17:37.397109 containerd[1579]: time="2026-04-14T13:17:37.388232120Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:17:37.397109 containerd[1579]: time="2026-04-14T13:17:37.388244113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:17:37.397109 containerd[1579]: time="2026-04-14T13:17:37.389238803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:17:37.401565 containerd[1579]: time="2026-04-14T13:17:37.400196715Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:17:37.401565 containerd[1579]: time="2026-04-14T13:17:37.400347443Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:17:37.401565 containerd[1579]: time="2026-04-14T13:17:37.400359978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:17:37.401565 containerd[1579]: time="2026-04-14T13:17:37.400477453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:17:37.638342 kubelet[2308]: E0414 13:17:37.637394 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 13:17:37.642046 kubelet[2308]: E0414 13:17:37.641949 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.13:6443: connect: connection refused" interval="3.2s" Apr 14 13:17:37.943076 containerd[1579]: time="2026-04-14T13:17:37.942929569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c1daf096b35f89128a39c02ce2390d78cf7315ee1293c9feb081f3460391859\"" Apr 14 13:17:37.968215 kubelet[2308]: E0414 13:17:37.967732 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:38.050669 containerd[1579]: time="2026-04-14T13:17:38.050252505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id \"a6985a73340d46b9950ef1958f21553514054108fd318ec6e2841ce6f50de949\"" Apr 14 13:17:38.073305 kubelet[2308]: E0414 13:17:38.073218 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:38.103441 containerd[1579]: time="2026-04-14T13:17:38.103091390Z" level=info msg="CreateContainer within sandbox \"6c1daf096b35f89128a39c02ce2390d78cf7315ee1293c9feb081f3460391859\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 14 13:17:38.104290 kubelet[2308]: I0414 13:17:38.104154 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:17:38.104733 containerd[1579]: time="2026-04-14T13:17:38.104691233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ebc9914d822867a0887622bbfbeed705,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3e75668b5e4cc2ebc0597c7880fc50a4c5fcc2cacad201209c9471165d9b6ea\"" Apr 14 13:17:38.104949 kubelet[2308]: E0414 13:17:38.104906 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.13:6443/api/v1/nodes\": dial tcp 10.0.0.13:6443: connect: connection refused" node="localhost" Apr 14 13:17:38.120735 kubelet[2308]: E0414 13:17:38.119355 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:38.121123 containerd[1579]: time="2026-04-14T13:17:38.120902795Z" level=info msg="CreateContainer within sandbox \"a6985a73340d46b9950ef1958f21553514054108fd318ec6e2841ce6f50de949\" for container 
&ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 14 13:17:38.160316 containerd[1579]: time="2026-04-14T13:17:38.160057954Z" level=info msg="CreateContainer within sandbox \"d3e75668b5e4cc2ebc0597c7880fc50a4c5fcc2cacad201209c9471165d9b6ea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 14 13:17:38.373930 kubelet[2308]: E0414 13:17:38.372881 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 13:17:38.373930 kubelet[2308]: E0414 13:17:38.373264 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 13:17:38.482992 kubelet[2308]: E0414 13:17:38.482769 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.13:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.13:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a63b9e5e0a28a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:17:34.412834981 +0000 UTC m=+2.100645164,LastTimestamp:2026-04-14 13:17:34.412834981 +0000 UTC m=+2.100645164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:17:38.486981 containerd[1579]: time="2026-04-14T13:17:38.486915657Z" level=info msg="CreateContainer within sandbox \"6c1daf096b35f89128a39c02ce2390d78cf7315ee1293c9feb081f3460391859\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5\"" Apr 14 13:17:38.489049 kubelet[2308]: E0414 13:17:38.488510 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.13:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 13:17:38.491134 containerd[1579]: time="2026-04-14T13:17:38.489388224Z" level=info msg="StartContainer for \"d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5\"" Apr 14 13:17:38.501419 containerd[1579]: time="2026-04-14T13:17:38.501028080Z" level=info msg="CreateContainer within sandbox \"a6985a73340d46b9950ef1958f21553514054108fd318ec6e2841ce6f50de949\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\"" Apr 14 13:17:38.509006 containerd[1579]: time="2026-04-14T13:17:38.508972457Z" level=info msg="StartContainer for \"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\"" Apr 14 13:17:38.522743 containerd[1579]: time="2026-04-14T13:17:38.522503394Z" level=info msg="CreateContainer within 
sandbox \"d3e75668b5e4cc2ebc0597c7880fc50a4c5fcc2cacad201209c9471165d9b6ea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0a133c8ae64573bb1121950e970fd7939bf38491f6d383d62ad8ec14753aeef0\"" Apr 14 13:17:38.547823 containerd[1579]: time="2026-04-14T13:17:38.547501163Z" level=info msg="StartContainer for \"0a133c8ae64573bb1121950e970fd7939bf38491f6d383d62ad8ec14753aeef0\"" Apr 14 13:17:38.835865 containerd[1579]: time="2026-04-14T13:17:38.835627728Z" level=info msg="StartContainer for \"d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5\" returns successfully" Apr 14 13:17:38.935057 containerd[1579]: time="2026-04-14T13:17:38.932920957Z" level=info msg="StartContainer for \"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" returns successfully" Apr 14 13:17:38.943832 containerd[1579]: time="2026-04-14T13:17:38.933344368Z" level=info msg="StartContainer for \"0a133c8ae64573bb1121950e970fd7939bf38491f6d383d62ad8ec14753aeef0\" returns successfully" Apr 14 13:17:39.065596 kubelet[2308]: E0414 13:17:39.065208 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:39.145687 kubelet[2308]: E0414 13:17:39.142866 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:39.151350 kubelet[2308]: E0414 13:17:39.151291 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:39.159135 kubelet[2308]: E0414 13:17:39.151524 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:39.673984 kubelet[2308]: E0414 13:17:39.673300 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:39.674280 kubelet[2308]: E0414 13:17:39.674192 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:40.646819 kubelet[2308]: E0414 13:17:40.646673 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:40.658487 kubelet[2308]: E0414 13:17:40.655072 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:40.679018 kubelet[2308]: E0414 13:17:40.678888 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:40.679302 kubelet[2308]: E0414 13:17:40.679243 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:41.380910 kubelet[2308]: I0414 13:17:41.374914 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:17:42.080522 kubelet[2308]: E0414 13:17:42.080159 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to 
get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:42.106223 kubelet[2308]: E0414 13:17:42.087041 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:42.291109 kubelet[2308]: E0414 13:17:42.286510 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:42.314999 kubelet[2308]: E0414 13:17:42.314118 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:42.784750 update_engine[1564]: I20260414 13:17:42.784045 1564 update_attempter.cc:509] Updating boot flags... Apr 14 13:17:43.234140 kubelet[2308]: E0414 13:17:43.233451 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:43.278714 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (2600) Apr 14 13:17:43.307609 kubelet[2308]: E0414 13:17:43.306467 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:44.768400 kubelet[2308]: E0414 13:17:44.768038 2308 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 13:17:45.020274 kubelet[2308]: E0414 13:17:45.019470 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 13:17:45.020709 kubelet[2308]: E0414 13:17:45.020422 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:49.962161 kubelet[2308]: E0414 13:17:49.884205 2308 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a63b9e5e0a28a5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 13:17:34.412834981 +0000 UTC m=+2.100645164,LastTimestamp:2026-04-14 13:17:34.412834981 +0000 UTC m=+2.100645164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 13:17:50.002289 kubelet[2308]: I0414 13:17:50.002007 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 13:17:50.008522 kubelet[2308]: I0414 13:17:50.004863 2308 apiserver.go:52] "Watching apiserver" Apr 14 13:17:50.090194 kubelet[2308]: I0414 13:17:50.071773 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 13:17:50.175618 kubelet[2308]: I0414 13:17:50.175515 2308 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 13:17:50.421620 kubelet[2308]: E0414 13:17:50.420002 2308 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 14 13:17:50.456311 kubelet[2308]: E0414 13:17:50.453787 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:50.483223 kubelet[2308]: I0414 13:17:50.477367 2308 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 13:17:50.524493 kubelet[2308]: E0414 13:17:50.517813 2308 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 14 13:17:50.524493 kubelet[2308]: I0414 13:17:50.518264 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 13:17:51.183355 kubelet[2308]: I0414 13:17:51.182945 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 14 13:17:51.252647 kubelet[2308]: E0414 13:17:51.252318 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:51.280005 kubelet[2308]: I0414 13:17:51.279862 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 14 13:17:51.322914 kubelet[2308]: E0414 13:17:51.308483 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:51.322914 kubelet[2308]: E0414 13:17:51.318871 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:51.569071 kubelet[2308]: I0414 13:17:51.563502 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.56074348 podStartE2EDuration="1.56074348s" podCreationTimestamp="2026-04-14 13:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:17:51.559995005 +0000 UTC m=+19.247805200" watchObservedRunningTime="2026-04-14 13:17:51.56074348 +0000 UTC m=+19.248553660" Apr 14 13:17:52.511046 kubelet[2308]: E0414 13:17:52.510722 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:17:54.720899 kubelet[2308]: I0414 13:17:54.713047 2308 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.712990636 podStartE2EDuration="3.712990636s" podCreationTimestamp="2026-04-14 13:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:17:54.711985825 +0000 UTC m=+22.399796018" watchObservedRunningTime="2026-04-14 13:17:54.712990636 +0000 UTC m=+22.400800825" Apr 14 13:17:55.275401 kubelet[2308]: I0414 13:17:55.270009 2308 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=4.26996093 podStartE2EDuration="4.26996093s" podCreationTimestamp="2026-04-14 13:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:17:55.269660759 +0000 UTC m=+22.957470945" watchObservedRunningTime="2026-04-14 13:17:55.26996093 +0000 UTC m=+22.957771116" Apr 14 13:17:55.726517 kubelet[2308]: E0414 13:17:55.725387 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:00.590282 systemd[1]: Reloading requested from client PID 2608 ('systemctl') (unit session-7.scope)... Apr 14 13:18:00.595321 systemd[1]: Reloading... Apr 14 13:18:01.111478 kubelet[2308]: E0414 13:18:01.111315 2308 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:01.266058 zram_generator::config[2651]: No configuration found. Apr 14 13:18:02.146930 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 13:18:02.846257 systemd[1]: Reloading finished in 2213 ms. Apr 14 13:18:03.051181 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:18:03.131630 systemd[1]: kubelet.service: Deactivated successfully. Apr 14 13:18:03.132890 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:18:03.249952 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 13:18:04.812479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 14 13:18:04.858379 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 14 13:18:06.813323 sudo[2714]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Apr 14 13:18:06.814063 sudo[2714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Apr 14 13:18:08.651903 kubelet[2702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 13:18:08.651903 kubelet[2702]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 14 13:18:08.651903 kubelet[2702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 14 13:18:08.665116 kubelet[2702]: I0414 13:18:08.660793 2702 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 13:18:09.441169 kubelet[2702]: I0414 13:18:09.437340 2702 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 13:18:09.445051 kubelet[2702]: I0414 13:18:09.441415 2702 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 13:18:09.482366 kubelet[2702]: I0414 13:18:09.480827 2702 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 13:18:09.682904 kubelet[2702]: I0414 13:18:09.654967 2702 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 14 13:18:10.126710 kubelet[2702]: I0414 13:18:10.115513 2702 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 13:18:10.440861 kubelet[2702]: E0414 13:18:10.437686 2702 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 13:18:10.440861 kubelet[2702]: I0414 13:18:10.438026 2702 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 14 13:18:10.633778 kubelet[2702]: I0414 13:18:10.633276 2702 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 14 13:18:10.641850 kubelet[2702]: I0414 13:18:10.641166 2702 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 13:18:10.662878 kubelet[2702]: I0414 13:18:10.641810 2702 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 14 13:18:10.662878 kubelet[2702]: I0414 13:18:10.662931 2702 topology_manager.go:138] "Creating topology 
manager with none policy" Apr 14 13:18:10.664079 kubelet[2702]: I0414 13:18:10.663097 2702 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 13:18:10.664079 kubelet[2702]: I0414 13:18:10.663345 2702 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:18:10.664305 kubelet[2702]: I0414 13:18:10.664245 2702 kubelet.go:480] "Attempting to sync node with API server" Apr 14 13:18:10.664305 kubelet[2702]: I0414 13:18:10.664295 2702 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 13:18:10.664405 kubelet[2702]: I0414 13:18:10.664374 2702 kubelet.go:386] "Adding apiserver pod source" Apr 14 13:18:10.664405 kubelet[2702]: I0414 13:18:10.664393 2702 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 13:18:10.857827 kubelet[2702]: I0414 13:18:10.853093 2702 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 13:18:10.974211 kubelet[2702]: I0414 13:18:10.959055 2702 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 13:18:11.119225 kubelet[2702]: I0414 13:18:11.118877 2702 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 13:18:11.119225 kubelet[2702]: I0414 13:18:11.118969 2702 server.go:1289] "Started kubelet" Apr 14 13:18:11.119706 kubelet[2702]: I0414 13:18:11.119633 2702 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 13:18:11.120073 kubelet[2702]: I0414 13:18:11.119977 2702 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 13:18:11.128970 kubelet[2702]: I0414 13:18:11.127939 2702 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 13:18:11.439239 kubelet[2702]: I0414 13:18:11.433393 2702 server.go:317] "Adding debug handlers to kubelet server" Apr 14 13:18:11.457699 kubelet[2702]: I0414 13:18:11.457216 2702 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 13:18:11.460970 kubelet[2702]: I0414 13:18:11.459494 2702 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 13:18:11.460970 kubelet[2702]: I0414 13:18:11.459991 2702 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 13:18:11.533726 kubelet[2702]: I0414 13:18:11.533611 2702 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 13:18:11.574220 kubelet[2702]: I0414 13:18:11.573719 2702 reconciler.go:26] "Reconciler: start to sync state" Apr 14 13:18:11.574895 kubelet[2702]: I0414 13:18:11.574843 2702 factory.go:223] Registration of the systemd container factory successfully Apr 14 13:18:11.575312 kubelet[2702]: I0414 13:18:11.575220 2702 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 14 13:18:11.744898 kubelet[2702]: E0414 13:18:11.743891 2702 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 13:18:11.861825 kubelet[2702]: I0414 13:18:11.855086 2702 apiserver.go:52] "Watching apiserver" Apr 14 13:18:11.861825 kubelet[2702]: I0414 13:18:11.859168 2702 factory.go:223] Registration of the containerd container factory successfully Apr 14 13:18:13.482789 sudo[2714]: pam_unix(sudo:session): session closed for user root Apr 14 13:18:15.334464 kubelet[2702]: I0414 13:18:15.334016 2702 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 13:18:15.759006 kubelet[2702]: I0414 13:18:15.717448 2702 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 13:18:15.873184 kubelet[2702]: I0414 13:18:15.868992 2702 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 13:18:16.103240 kubelet[2702]: I0414 13:18:15.939749 2702 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 14 13:18:16.103240 kubelet[2702]: I0414 13:18:15.940260 2702 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 13:18:16.103240 kubelet[2702]: E0414 13:18:15.942299 2702 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 13:18:16.531175 kubelet[2702]: E0414 13:18:16.530801 2702 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 13:18:16.748401 kubelet[2702]: E0414 13:18:16.747280 2702 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 13:18:17.348976 kubelet[2702]: E0414 13:18:17.336686 2702 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 13:18:18.265505 kubelet[2702]: E0414 13:18:18.241284 2702 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 13:18:20.070472 kubelet[2702]: E0414 13:18:19.959124 2702 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 14 13:18:21.404587 kubelet[2702]: I0414 13:18:21.404321 2702 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 13:18:21.404587 kubelet[2702]: I0414 13:18:21.404382 2702 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 13:18:21.404587 kubelet[2702]: I0414 13:18:21.404458 2702 state_mem.go:36] "Initialized new in-memory state store" Apr 14 13:18:21.431301 kubelet[2702]: I0414 13:18:21.429729 2702 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 14 13:18:21.443034 kubelet[2702]: I0414 13:18:21.432049 2702 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 14 13:18:21.448198 kubelet[2702]: I0414 13:18:21.446861 2702 policy_none.go:49] "None policy: Start" Apr 14 13:18:21.448781 kubelet[2702]: I0414 13:18:21.447444 2702 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 13:18:21.450158 kubelet[2702]: I0414 13:18:21.450047 2702 state_mem.go:35] "Initializing new in-memory state store" Apr 14 13:18:21.451703 kubelet[2702]: I0414 13:18:21.451653 2702 state_mem.go:75] "Updated machine memory state" Apr 14 13:18:21.633826 kubelet[2702]: E0414 
13:18:21.631385 2702 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 13:18:21.640074 kubelet[2702]: I0414 13:18:21.639943 2702 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 13:18:21.640074 kubelet[2702]: I0414 13:18:21.639983 2702 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 13:18:21.652162 kubelet[2702]: I0414 13:18:21.648360 2702 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 13:18:21.762140 kubelet[2702]: E0414 13:18:21.762035 2702 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 14 13:18:22.037657 kubelet[2702]: I0414 13:18:22.032454 2702 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 13:18:22.405382 kubelet[2702]: I0414 13:18:22.404333 2702 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Apr 14 13:18:22.409754 kubelet[2702]: I0414 13:18:22.408857 2702 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 14 13:18:22.419518 kubelet[2702]: I0414 13:18:22.415529 2702 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 14 13:18:22.428917 containerd[1579]: time="2026-04-14T13:18:22.427590232Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 14 13:18:22.555066 kubelet[2702]: I0414 13:18:22.548852 2702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 14 13:18:23.335622 kubelet[2702]: I0414 13:18:23.335194 2702 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 14 13:18:23.336673 kubelet[2702]: I0414 13:18:23.336036 2702 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 14 13:18:23.374046 kubelet[2702]: I0414 13:18:23.373920 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebc9914d822867a0887622bbfbeed705-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebc9914d822867a0887622bbfbeed705\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:18:23.416869 kubelet[2702]: I0414 13:18:23.410530 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebc9914d822867a0887622bbfbeed705-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ebc9914d822867a0887622bbfbeed705\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:18:23.423497 kubelet[2702]: I0414 13:18:23.421969 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:18:23.427832 kubelet[2702]: I0414 13:18:23.427394 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" 
(UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:18:23.427832 kubelet[2702]: I0414 13:18:23.427462 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:18:23.427832 kubelet[2702]: I0414 13:18:23.427478 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a71a6e05-4a2c-4269-8c67-52ceb438356d-kube-proxy\") pod \"kube-proxy-f8gk5\" (UID: \"a71a6e05-4a2c-4269-8c67-52ceb438356d\") " pod="kube-system/kube-proxy-f8gk5" Apr 14 13:18:23.427832 kubelet[2702]: I0414 13:18:23.427488 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a71a6e05-4a2c-4269-8c67-52ceb438356d-lib-modules\") pod \"kube-proxy-f8gk5\" (UID: \"a71a6e05-4a2c-4269-8c67-52ceb438356d\") " pod="kube-system/kube-proxy-f8gk5" Apr 14 13:18:23.427832 kubelet[2702]: I0414 13:18:23.427501 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjbcd\" (UniqueName: \"kubernetes.io/projected/a71a6e05-4a2c-4269-8c67-52ceb438356d-kube-api-access-wjbcd\") pod \"kube-proxy-f8gk5\" (UID: \"a71a6e05-4a2c-4269-8c67-52ceb438356d\") " pod="kube-system/kube-proxy-f8gk5" Apr 14 13:18:23.455774 kubelet[2702]: I0414 13:18:23.427589 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebc9914d822867a0887622bbfbeed705-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebc9914d822867a0887622bbfbeed705\") " pod="kube-system/kube-apiserver-localhost" Apr 14 13:18:23.455774 kubelet[2702]: I0414 13:18:23.427601 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:18:23.455774 kubelet[2702]: I0414 13:18:23.427638 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 13:18:23.455774 kubelet[2702]: I0414 13:18:23.427653 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 13:18:23.455774 kubelet[2702]: I0414 13:18:23.427666 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a71a6e05-4a2c-4269-8c67-52ceb438356d-xtables-lock\") pod \"kube-proxy-f8gk5\" (UID: 
\"a71a6e05-4a2c-4269-8c67-52ceb438356d\") " pod="kube-system/kube-proxy-f8gk5" Apr 14 13:18:23.455774 kubelet[2702]: E0414 13:18:23.427899 2702 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 14 13:18:23.914944 kubelet[2702]: E0414 13:18:23.913252 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:24.387218 kubelet[2702]: E0414 13:18:24.379187 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:24.675236 kubelet[2702]: E0414 13:18:24.636800 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:25.420640 kubelet[2702]: E0414 13:18:25.406741 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:25.477632 kubelet[2702]: E0414 13:18:25.460429 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.442s" Apr 14 13:18:25.661081 kubelet[2702]: E0414 13:18:25.655828 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:25.785760 containerd[1579]: time="2026-04-14T13:18:25.784342046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8gk5,Uid:a71a6e05-4a2c-4269-8c67-52ceb438356d,Namespace:kube-system,Attempt:0,}" Apr 14 13:18:25.980268 kubelet[2702]: I0414 13:18:25.979891 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d09ec03a-4296-44dc-b569-d7f061ca22b0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-k28qz\" (UID: \"d09ec03a-4296-44dc-b569-d7f061ca22b0\") " pod="kube-system/cilium-operator-6c4d7847fc-k28qz" Apr 14 13:18:26.009230 kubelet[2702]: I0414 13:18:26.009075 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7znzz\" (UniqueName: \"kubernetes.io/projected/d09ec03a-4296-44dc-b569-d7f061ca22b0-kube-api-access-7znzz\") pod \"cilium-operator-6c4d7847fc-k28qz\" (UID: \"d09ec03a-4296-44dc-b569-d7f061ca22b0\") " pod="kube-system/cilium-operator-6c4d7847fc-k28qz" Apr 14 13:18:26.209314 kubelet[2702]: I0414 13:18:26.208611 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-cgroup\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.209314 kubelet[2702]: I0414 13:18:26.208639 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-config-path\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.209314 kubelet[2702]: I0414 
13:18:26.208654 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-bpf-maps\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.209314 kubelet[2702]: I0414 13:18:26.208668 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-host-proc-sys-kernel\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.209314 kubelet[2702]: I0414 13:18:26.208683 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cni-path\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.209314 kubelet[2702]: I0414 13:18:26.208733 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-hostproc\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.214599 kubelet[2702]: I0414 13:18:26.208747 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-host-proc-sys-net\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.214599 kubelet[2702]: I0414 13:18:26.208783 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29979523-b9ef-4e95-ada7-13b2d8b91c40-clustermesh-secrets\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.214599 kubelet[2702]: I0414 13:18:26.208795 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-xtables-lock\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.214599 kubelet[2702]: I0414 13:18:26.208805 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29979523-b9ef-4e95-ada7-13b2d8b91c40-hubble-tls\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.214599 kubelet[2702]: I0414 13:18:26.208817 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-run\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.214599 kubelet[2702]: I0414 13:18:26.208826 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-etc-cni-netd\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.215326 kubelet[2702]: I0414 13:18:26.208858 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-lib-modules\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.215326 kubelet[2702]: I0414 13:18:26.208869 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j25gm\" (UniqueName: \"kubernetes.io/projected/29979523-b9ef-4e95-ada7-13b2d8b91c40-kube-api-access-j25gm\") pod \"cilium-mcfj8\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " pod="kube-system/cilium-mcfj8" Apr 14 13:18:26.647269 containerd[1579]: time="2026-04-14T13:18:26.647013663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:18:26.648117 kubelet[2702]: E0414 13:18:26.648089 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:26.650941 containerd[1579]: time="2026-04-14T13:18:26.648247922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:18:26.650941 containerd[1579]: time="2026-04-14T13:18:26.648267378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:18:26.651762 containerd[1579]: time="2026-04-14T13:18:26.651635655Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:18:26.672158 kubelet[2702]: E0414 13:18:26.672026 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:26.672326 kubelet[2702]: E0414 13:18:26.672265 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:27.000391 kubelet[2702]: E0414 13:18:26.974005 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:27.218603 containerd[1579]: time="2026-04-14T13:18:27.202888059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k28qz,Uid:d09ec03a-4296-44dc-b569-d7f061ca22b0,Namespace:kube-system,Attempt:0,}" Apr 14 13:18:27.809826 kubelet[2702]: E0414 13:18:27.809758 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:27.829394 containerd[1579]: time="2026-04-14T13:18:27.829354574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mcfj8,Uid:29979523-b9ef-4e95-ada7-13b2d8b91c40,Namespace:kube-system,Attempt:0,}" Apr 14 13:18:28.050658 kubelet[2702]: E0414 13:18:28.050622 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:28.382986 containerd[1579]: time="2026-04-14T13:18:28.378836264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:18:28.382986 containerd[1579]: time="2026-04-14T13:18:28.379065062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:18:28.382986 containerd[1579]: time="2026-04-14T13:18:28.379086658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:18:28.572659 containerd[1579]: time="2026-04-14T13:18:28.570884238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:18:28.830443 containerd[1579]: time="2026-04-14T13:18:28.830201386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f8gk5,Uid:a71a6e05-4a2c-4269-8c67-52ceb438356d,Namespace:kube-system,Attempt:0,} returns sandbox id \"342f808da14ab7d73a95e7b128579a674e02260cc8e7294bbc7e7dc19341d54c\"" Apr 14 13:18:29.547925 kubelet[2702]: E0414 13:18:29.547823 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:30.106506 systemd[1]: run-containerd-runc-k8s.io-b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9-runc.ad0HcI.mount: Deactivated successfully. Apr 14 13:18:30.156848 containerd[1579]: time="2026-04-14T13:18:30.119282466Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:18:30.156848 containerd[1579]: time="2026-04-14T13:18:30.125471285Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:18:30.156848 containerd[1579]: time="2026-04-14T13:18:30.125692201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:18:30.156848 containerd[1579]: time="2026-04-14T13:18:30.126117719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:18:30.658292 containerd[1579]: time="2026-04-14T13:18:30.658106085Z" level=info msg="CreateContainer within sandbox \"342f808da14ab7d73a95e7b128579a674e02260cc8e7294bbc7e7dc19341d54c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 14 13:18:30.758938 systemd[1]: run-containerd-runc-k8s.io-02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90-runc.4TMngd.mount: Deactivated successfully. Apr 14 13:18:31.352248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2443601836.mount: Deactivated successfully. Apr 14 13:18:31.380353 containerd[1579]: time="2026-04-14T13:18:31.365681485Z" level=info msg="CreateContainer within sandbox \"342f808da14ab7d73a95e7b128579a674e02260cc8e7294bbc7e7dc19341d54c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"65c3c09af16c48d9bed74736cda8e609ab0b56a62d214ba172a7fef04493f513\"" Apr 14 13:18:31.531302 containerd[1579]: time="2026-04-14T13:18:31.526038677Z" level=info msg="StartContainer for \"65c3c09af16c48d9bed74736cda8e609ab0b56a62d214ba172a7fef04493f513\"" Apr 14 13:18:31.531302 containerd[1579]: time="2026-04-14T13:18:31.530634114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mcfj8,Uid:29979523-b9ef-4e95-ada7-13b2d8b91c40,Namespace:kube-system,Attempt:0,} returns sandbox id \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\"" Apr 14 13:18:31.538025 kubelet[2702]: E0414 13:18:31.532205 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:31.728659 containerd[1579]: time="2026-04-14T13:18:31.727112864Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Apr 14 13:18:31.934444 containerd[1579]: time="2026-04-14T13:18:31.934300673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k28qz,Uid:d09ec03a-4296-44dc-b569-d7f061ca22b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\"" Apr 14 13:18:31.938004 kubelet[2702]: E0414 13:18:31.937956 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:33.053621 containerd[1579]: time="2026-04-14T13:18:33.053496053Z" level=info msg="StartContainer for \"65c3c09af16c48d9bed74736cda8e609ab0b56a62d214ba172a7fef04493f513\" returns successfully" Apr 14 13:18:33.213810 kubelet[2702]: E0414 13:18:33.211798 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 
13:18:34.372923 kubelet[2702]: E0414 13:18:34.256280 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:34.617673 kubelet[2702]: E0414 13:18:34.610659 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:34.974980 kubelet[2702]: E0414 13:18:34.961207 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:34.999178 kubelet[2702]: I0414 13:18:34.998937 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f8gk5" podStartSLOduration=20.998843738 podStartE2EDuration="20.998843738s" podCreationTimestamp="2026-04-14 13:18:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 13:18:33.350257579 +0000 UTC m=+28.117374902" watchObservedRunningTime="2026-04-14 13:18:34.998843738 +0000 UTC m=+29.765961070" Apr 14 13:18:35.228485 kubelet[2702]: E0414 13:18:35.203324 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:35.930490 kubelet[2702]: E0414 13:18:35.920754 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:18:59.360615 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4187050823.mount: Deactivated successfully. 
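Note: the recurring dns.go "Nameserver limits exceeded" entries above are kubelet warning that the host's resolv.conf lists more upstream nameservers than glibc's resolver honors (three), so only the first three are applied: 1.1.1.1, 1.0.0.1, 8.8.8.8. A minimal Python sketch of that truncation, not kubelet's actual source; the fourth upstream (9.9.9.9) is invented here purely for illustration:

    # Sketch of the behavior behind the warning: glibc only honors the first
    # 3 "nameserver" lines in /etc/resolv.conf, so kubelet logs the overflow
    # and applies only that prefix.
    MAX_NAMESERVERS = 3  # glibc resolver limit (MAXNS)

    def applied_nameservers(resolv_conf: str) -> list[str]:
        servers = []
        for line in resolv_conf.splitlines():
            parts = line.split()
            if len(parts) >= 2 and parts[0] == "nameserver":
                servers.append(parts[1])
        if len(servers) > MAX_NAMESERVERS:
            print("Nameserver limits exceeded, applying:",
                  " ".join(servers[:MAX_NAMESERVERS]))
        return servers[:MAX_NAMESERVERS]

    # Four upstreams configured, three applied, matching the log's
    # "the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8".
    applied_nameservers(
        "nameserver 1.1.1.1\nnameserver 1.0.0.1\n"
        "nameserver 8.8.8.8\nnameserver 9.9.9.9\n"
    )
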
Apr 14 13:19:22.032062 containerd[1579]: time="2026-04-14T13:19:22.031308900Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Apr 14 13:19:22.032062 containerd[1579]: time="2026-04-14T13:19:22.032106784Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:19:22.139870 containerd[1579]: time="2026-04-14T13:19:22.138438136Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:19:22.333868 containerd[1579]: time="2026-04-14T13:19:22.329209464Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 50.601992405s" Apr 14 13:19:22.333868 containerd[1579]: time="2026-04-14T13:19:22.329416839Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Apr 14 13:19:22.410644 containerd[1579]: time="2026-04-14T13:19:22.410240014Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Apr 14 13:19:22.693848 containerd[1579]: time="2026-04-14T13:19:22.688517269Z" level=info msg="CreateContainer within sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 14 13:19:22.905042 containerd[1579]: time="2026-04-14T13:19:22.904714938Z" level=info msg="CreateContainer within sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\"" Apr 14 13:19:22.914646 containerd[1579]: time="2026-04-14T13:19:22.914229124Z" level=info msg="StartContainer for \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\"" Apr 14 13:19:23.427309 containerd[1579]: time="2026-04-14T13:19:23.426326563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:19:23.427309 containerd[1579]: time="2026-04-14T13:19:23.426401852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:19:23.427309 containerd[1579]: time="2026-04-14T13:19:23.426431610Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:19:23.427309 containerd[1579]: time="2026-04-14T13:19:23.426964511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:19:24.296355 containerd[1579]: time="2026-04-14T13:19:24.276695099Z" level=info msg="StartContainer for \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\" returns successfully" Apr 14 13:19:24.878507 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc-rootfs.mount: Deactivated successfully. Apr 14 13:19:24.961053 kubelet[2702]: E0414 13:19:24.960905 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:25.076982 containerd[1579]: time="2026-04-14T13:19:25.069987623Z" level=info msg="shim disconnected" id=6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc namespace=k8s.io Apr 14 13:19:25.126510 containerd[1579]: time="2026-04-14T13:19:25.075873331Z" level=warning msg="cleaning up after shim disconnected" id=6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc namespace=k8s.io Apr 14 13:19:25.126510 containerd[1579]: time="2026-04-14T13:19:25.084105776Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:19:26.010918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount483501207.mount: Deactivated successfully. Apr 14 13:19:26.056167 kubelet[2702]: E0414 13:19:26.055997 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:26.310228 containerd[1579]: time="2026-04-14T13:19:26.308178800Z" level=info msg="CreateContainer within sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 14 13:19:26.586161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4171460032.mount: Deactivated successfully. Apr 14 13:19:26.884966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount344576980.mount: Deactivated successfully. Apr 14 13:19:27.045435 containerd[1579]: time="2026-04-14T13:19:27.044980487Z" level=info msg="CreateContainer within sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e\"" Apr 14 13:19:27.133234 containerd[1579]: time="2026-04-14T13:19:27.133001028Z" level=info msg="StartContainer for \"424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e\"" Apr 14 13:19:27.562200 containerd[1579]: time="2026-04-14T13:19:27.561905452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:19:27.562200 containerd[1579]: time="2026-04-14T13:19:27.562215785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:19:27.562200 containerd[1579]: time="2026-04-14T13:19:27.562242088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:19:27.572730 containerd[1579]: time="2026-04-14T13:19:27.562406893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:19:28.287303 containerd[1579]: time="2026-04-14T13:19:28.286689165Z" level=info msg="StartContainer for \"424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e\" returns successfully" Apr 14 13:19:28.340244 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 14 13:19:28.340487 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 14 13:19:28.341255 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Apr 14 13:19:28.363013 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 14 13:19:28.551415 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 14 13:19:28.798528 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e-rootfs.mount: Deactivated successfully. Apr 14 13:19:28.809715 containerd[1579]: time="2026-04-14T13:19:28.808261105Z" level=info msg="shim disconnected" id=424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e namespace=k8s.io Apr 14 13:19:28.809715 containerd[1579]: time="2026-04-14T13:19:28.808650674Z" level=warning msg="cleaning up after shim disconnected" id=424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e namespace=k8s.io Apr 14 13:19:28.809715 containerd[1579]: time="2026-04-14T13:19:28.808664084Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:19:29.168926 kubelet[2702]: E0414 13:19:29.167928 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:29.439437 containerd[1579]: time="2026-04-14T13:19:29.428030323Z" level=info msg="CreateContainer within sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Apr 14 13:19:29.604080 containerd[1579]: time="2026-04-14T13:19:29.603924263Z" level=info msg="CreateContainer within sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\"" Apr 14 13:19:29.608575 containerd[1579]: time="2026-04-14T13:19:29.606637060Z" level=info msg="StartContainer for \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\"" Apr 14 13:19:29.679626 containerd[1579]: time="2026-04-14T13:19:29.676977245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:19:29.679626 containerd[1579]: time="2026-04-14T13:19:29.677103876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:19:29.679626 containerd[1579]: time="2026-04-14T13:19:29.677114942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:19:29.681337 containerd[1579]: time="2026-04-14T13:19:29.681227346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:19:30.049055 containerd[1579]: time="2026-04-14T13:19:30.048800679Z" level=info msg="StartContainer for \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\" returns successfully" Apr 14 13:19:30.812229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884-rootfs.mount: Deactivated successfully. Apr 14 13:19:30.857479 containerd[1579]: time="2026-04-14T13:19:30.857070653Z" level=info msg="shim disconnected" id=88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884 namespace=k8s.io Apr 14 13:19:30.857479 containerd[1579]: time="2026-04-14T13:19:30.857499751Z" level=warning msg="cleaning up after shim disconnected" id=88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884 namespace=k8s.io Apr 14 13:19:30.860649 containerd[1579]: time="2026-04-14T13:19:30.857514625Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:19:31.182021 kubelet[2702]: E0414 13:19:31.180642 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:32.243671 kubelet[2702]: E0414 13:19:32.240997 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:34.155153 containerd[1579]: time="2026-04-14T13:19:34.154584839Z" level=info msg="CreateContainer within sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Apr 14 13:19:34.819438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3409382677.mount: Deactivated successfully. Apr 14 13:19:35.118375 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1821189549.mount: Deactivated successfully. Apr 14 13:19:35.139589 containerd[1579]: time="2026-04-14T13:19:35.139304406Z" level=info msg="CreateContainer within sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\"" Apr 14 13:19:35.151284 containerd[1579]: time="2026-04-14T13:19:35.149963296Z" level=info msg="StartContainer for \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\"" Apr 14 13:19:35.761197 containerd[1579]: time="2026-04-14T13:19:35.756980933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:19:35.761197 containerd[1579]: time="2026-04-14T13:19:35.757044050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:19:35.761197 containerd[1579]: time="2026-04-14T13:19:35.757056558Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:19:35.761197 containerd[1579]: time="2026-04-14T13:19:35.757216864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:19:37.207648 kubelet[2702]: E0414 13:19:37.207062 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.156s" Apr 14 13:19:37.245737 containerd[1579]: time="2026-04-14T13:19:37.245481541Z" level=info msg="StartContainer for \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\" returns successfully" Apr 14 13:19:38.160346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd-rootfs.mount: Deactivated successfully. Apr 14 13:19:38.219174 containerd[1579]: time="2026-04-14T13:19:38.193321839Z" level=info msg="shim disconnected" id=8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd namespace=k8s.io Apr 14 13:19:38.221833 containerd[1579]: time="2026-04-14T13:19:38.221593138Z" level=warning msg="cleaning up after shim disconnected" id=8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd namespace=k8s.io Apr 14 13:19:38.221833 containerd[1579]: time="2026-04-14T13:19:38.221664563Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:19:38.405697 containerd[1579]: time="2026-04-14T13:19:38.404392665Z" level=warning msg="cleanup warnings time=\"2026-04-14T13:19:38Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 14 13:19:38.760207 kubelet[2702]: E0414 13:19:38.759689 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:40.702765 containerd[1579]: time="2026-04-14T13:19:40.702626889Z" level=info msg="CreateContainer within sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Apr 14 13:19:40.946564 containerd[1579]: time="2026-04-14T13:19:40.946411015Z" level=info msg="CreateContainer within sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\"" Apr 14 13:19:40.959443 containerd[1579]: time="2026-04-14T13:19:40.949239970Z" level=info msg="StartContainer for \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\"" Apr 14 13:19:41.314678 containerd[1579]: time="2026-04-14T13:19:41.313058531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:19:41.314678 containerd[1579]: time="2026-04-14T13:19:41.313255189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:19:41.314678 containerd[1579]: time="2026-04-14T13:19:41.313277546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:19:41.314678 containerd[1579]: time="2026-04-14T13:19:41.313389728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:19:42.187985 containerd[1579]: time="2026-04-14T13:19:42.187596041Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:19:42.289162 containerd[1579]: time="2026-04-14T13:19:42.262501064Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Apr 14 13:19:42.347595 containerd[1579]: time="2026-04-14T13:19:42.345158988Z" level=info msg="StartContainer for \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\" returns successfully" Apr 14 13:19:42.348350 containerd[1579]: time="2026-04-14T13:19:42.348246973Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 13:19:42.456329 containerd[1579]: time="2026-04-14T13:19:42.446159395Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 20.035753596s" Apr 14 13:19:42.456329 containerd[1579]: time="2026-04-14T13:19:42.447609491Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Apr 14 13:19:44.105527 containerd[1579]: time="2026-04-14T13:19:44.089121991Z" level=info msg="CreateContainer within sandbox \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Apr 14 13:19:45.160657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount866419884.mount: Deactivated successfully. 
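Note: the two image pulls above took 50.6s and 20.0s respectively, which dominates the cilium pod start times reported later. Taking the byte counts from the "stop pulling image" entries and the durations from the "Pulled image" entries gives a back-of-envelope registry throughput for this node:

    # Rough pull throughput from the byte counts and durations logged above.
    pulls = {
        "quay.io/cilium/cilium:v1.12.5":           (166_730_503, 50.601992405),
        "quay.io/cilium/operator-generic:v1.12.5": (18_904_197,  20.035753596),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image}: {size_bytes / seconds / (1 << 20):.1f} MiB/s")
    # -> roughly 3.1 MiB/s and 0.9 MiB/s
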
Apr 14 13:19:45.937852 containerd[1579]: time="2026-04-14T13:19:45.937418644Z" level=info msg="CreateContainer within sandbox \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\"" Apr 14 13:19:46.766118 containerd[1579]: time="2026-04-14T13:19:46.761190698Z" level=info msg="StartContainer for \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\"" Apr 14 13:19:49.368391 kubelet[2702]: E0414 13:19:49.365178 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.25s" Apr 14 13:19:52.840833 containerd[1579]: time="2026-04-14T13:19:52.818684562Z" level=info msg="StartContainer for \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\" returns successfully" Apr 14 13:19:53.438347 kubelet[2702]: I0414 13:19:53.438005 2702 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 14 13:19:53.672108 kubelet[2702]: E0414 13:19:53.671996 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.299s" Apr 14 13:19:53.673987 kubelet[2702]: E0414 13:19:53.673919 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:53.713182 kubelet[2702]: E0414 13:19:53.712694 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:54.343290 kubelet[2702]: E0414 13:19:54.341646 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:54.418620 kubelet[2702]: I0414 13:19:54.417756 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/35ec3304-c011-45a9-8315-a39d73673d17-config-volume\") pod \"coredns-674b8bbfcf-jc82b\" (UID: \"35ec3304-c011-45a9-8315-a39d73673d17\") " pod="kube-system/coredns-674b8bbfcf-jc82b" Apr 14 13:19:54.418620 kubelet[2702]: I0414 13:19:54.417802 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jrjb\" (UniqueName: \"kubernetes.io/projected/67191550-1737-456e-a429-02a63e9c7256-kube-api-access-4jrjb\") pod \"coredns-674b8bbfcf-hld84\" (UID: \"67191550-1737-456e-a429-02a63e9c7256\") " pod="kube-system/coredns-674b8bbfcf-hld84" Apr 14 13:19:54.418620 kubelet[2702]: I0414 13:19:54.417840 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67191550-1737-456e-a429-02a63e9c7256-config-volume\") pod \"coredns-674b8bbfcf-hld84\" (UID: \"67191550-1737-456e-a429-02a63e9c7256\") " pod="kube-system/coredns-674b8bbfcf-hld84" Apr 14 13:19:54.418620 kubelet[2702]: I0414 13:19:54.417896 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpb9f\" (UniqueName: \"kubernetes.io/projected/35ec3304-c011-45a9-8315-a39d73673d17-kube-api-access-dpb9f\") pod \"coredns-674b8bbfcf-jc82b\" (UID: \"35ec3304-c011-45a9-8315-a39d73673d17\") " 
pod="kube-system/coredns-674b8bbfcf-jc82b" Apr 14 13:19:54.846224 kubelet[2702]: I0414 13:19:54.846090 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-k28qz" podStartSLOduration=21.277103969 podStartE2EDuration="1m31.845988099s" podCreationTimestamp="2026-04-14 13:18:23 +0000 UTC" firstStartedPulling="2026-04-14 13:18:31.943397329 +0000 UTC m=+26.710514648" lastFinishedPulling="2026-04-14 13:19:42.512281459 +0000 UTC m=+97.279398778" observedRunningTime="2026-04-14 13:19:54.845832578 +0000 UTC m=+109.612949906" watchObservedRunningTime="2026-04-14 13:19:54.845988099 +0000 UTC m=+109.613105430" Apr 14 13:19:54.871380 kubelet[2702]: E0414 13:19:54.870362 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:54.934222 containerd[1579]: time="2026-04-14T13:19:54.931313950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hld84,Uid:67191550-1737-456e-a429-02a63e9c7256,Namespace:kube-system,Attempt:0,}" Apr 14 13:19:55.096026 kubelet[2702]: E0414 13:19:55.086487 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:55.116210 containerd[1579]: time="2026-04-14T13:19:55.106522389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jc82b,Uid:35ec3304-c011-45a9-8315-a39d73673d17,Namespace:kube-system,Attempt:0,}" Apr 14 13:19:55.642103 kubelet[2702]: E0414 13:19:55.641687 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:55.783578 kubelet[2702]: E0414 13:19:55.762324 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:56.775713 kubelet[2702]: I0414 13:19:56.774456 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mcfj8" podStartSLOduration=42.970278433 podStartE2EDuration="1m33.772620799s" podCreationTimestamp="2026-04-14 13:18:23 +0000 UTC" firstStartedPulling="2026-04-14 13:18:31.573681438 +0000 UTC m=+26.340798750" lastFinishedPulling="2026-04-14 13:19:22.376023788 +0000 UTC m=+77.143141116" observedRunningTime="2026-04-14 13:19:56.683137708 +0000 UTC m=+111.450255027" watchObservedRunningTime="2026-04-14 13:19:56.772620799 +0000 UTC m=+111.539738127" Apr 14 13:19:57.061120 kubelet[2702]: E0414 13:19:57.051135 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:58.049046 kubelet[2702]: E0414 13:19:58.044169 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:19:59.019727 kubelet[2702]: E0414 13:19:59.015739 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:20:01.956106 systemd-networkd[1253]: cilium_host: Link UP Apr 14 13:20:01.956699 systemd-networkd[1253]: cilium_net: Link UP Apr 
14 13:20:01.957398 systemd-networkd[1253]: cilium_net: Gained carrier Apr 14 13:20:01.958046 systemd-networkd[1253]: cilium_host: Gained carrier Apr 14 13:20:02.008656 systemd-networkd[1253]: cilium_net: Gained IPv6LL Apr 14 13:20:02.620042 systemd-networkd[1253]: cilium_host: Gained IPv6LL Apr 14 13:20:02.666183 systemd-networkd[1253]: cilium_vxlan: Link UP Apr 14 13:20:02.666203 systemd-networkd[1253]: cilium_vxlan: Gained carrier Apr 14 13:20:03.948821 kubelet[2702]: E0414 13:20:03.946471 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:20:04.346235 systemd-networkd[1253]: cilium_vxlan: Gained IPv6LL Apr 14 13:20:04.811761 kernel: NET: Registered PF_ALG protocol family Apr 14 13:20:16.351586 kubelet[2702]: E0414 13:20:16.351449 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:20:19.102633 kubelet[2702]: E0414 13:20:19.102505 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.16s" Apr 14 13:20:27.007445 kubelet[2702]: E0414 13:20:27.000052 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.011s" Apr 14 13:20:30.586398 kubelet[2702]: E0414 13:20:30.379275 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.417s" Apr 14 13:20:33.051763 kubelet[2702]: E0414 13:20:33.050112 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.375s" Apr 14 13:20:35.033522 kubelet[2702]: E0414 13:20:35.028514 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.977s" Apr 14 13:20:38.026716 kubelet[2702]: E0414 13:20:38.024227 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.98s" Apr 14 13:20:44.052422 kubelet[2702]: E0414 13:20:44.047809 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.111s" Apr 14 13:20:50.352160 systemd-networkd[1253]: lxc_health: Link UP Apr 14 13:20:50.368006 systemd-networkd[1253]: lxc_health: Gained carrier Apr 14 13:20:51.775012 systemd-networkd[1253]: lxc_health: Gained IPv6LL Apr 14 13:20:56.860223 kubelet[2702]: E0414 13:20:56.854690 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.598s" Apr 14 13:20:56.919864 kubelet[2702]: E0414 13:20:56.919800 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:20:56.922824 kubelet[2702]: E0414 13:20:56.922100 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:20:57.473302 containerd[1579]: time="2026-04-14T13:20:57.472050397Z" level=error msg="Failed to destroy network for sandbox \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\"" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create 
cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Apr 14 13:20:57.593472 containerd[1579]: time="2026-04-14T13:20:57.587120612Z" level=error msg="encountered an error cleaning up failed sandbox \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Apr 14 13:20:57.593472 containerd[1579]: time="2026-04-14T13:20:57.587294374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hld84,Uid:67191550-1737-456e-a429-02a63e9c7256,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Apr 14 13:20:57.788170 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5-shm.mount: Deactivated successfully. Apr 14 13:20:57.881013 kubelet[2702]: E0414 13:20:57.874100 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:20:57.881013 kubelet[2702]: E0414 13:20:57.874477 2702 log.go:32] "RunPodSandbox from runtime service failed" err=< Apr 14 13:20:57.881013 kubelet[2702]: rpc error: code = Unknown desc = failed to setup network for sandbox "103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 14 13:20:57.881013 kubelet[2702]: Is the agent running? Apr 14 13:20:57.881013 kubelet[2702]: > Apr 14 13:20:57.881013 kubelet[2702]: E0414 13:20:57.874506 2702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Apr 14 13:20:57.881013 kubelet[2702]: rpc error: code = Unknown desc = failed to setup network for sandbox "103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 14 13:20:57.881013 kubelet[2702]: Is the agent running? 
Apr 14 13:20:57.881013 kubelet[2702]: > pod="kube-system/coredns-674b8bbfcf-hld84" Apr 14 13:20:57.881013 kubelet[2702]: E0414 13:20:57.874525 2702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Apr 14 13:20:57.881013 kubelet[2702]: rpc error: code = Unknown desc = failed to setup network for sandbox "103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 14 13:20:57.881013 kubelet[2702]: Is the agent running? Apr 14 13:20:57.881013 kubelet[2702]: > pod="kube-system/coredns-674b8bbfcf-hld84" Apr 14 13:20:58.378142 kubelet[2702]: E0414 13:20:58.377838 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hld84_kube-system(67191550-1737-456e-a429-02a63e9c7256)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hld84_kube-system(67191550-1737-456e-a429-02a63e9c7256)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:20:58.508874 containerd[1579]: time="2026-04-14T13:20:58.508094998Z" level=error msg="Failed to destroy network for sandbox \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\"" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Apr 14 13:20:58.730452 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70-shm.mount: Deactivated successfully. Apr 14 13:20:58.920789 containerd[1579]: time="2026-04-14T13:20:58.920708142Z" level=error msg="encountered an error cleaning up failed sandbox \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"cilium-cni\" name=\"cilium\" failed (delete): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" 
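Note: every coredns sandbox failure in this stretch has the same root cause: the cilium-cni plugin cannot reach the agent API at /var/run/cilium/cilium.sock because the socket does not exist yet (the cilium-agent container only started around 13:19:42 above and is still initializing). A minimal probe illustrating the failure mode; this is an illustration, not part of the plugin:

    import socket

    CILIUM_SOCK = "/var/run/cilium/cilium.sock"  # path from the errors above

    def agent_reachable(path: str = CILIUM_SOCK, timeout: float = 2.0) -> bool:
        """Dial the agent's unix socket the way a CNI client would."""
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect(path)  # ENOENT here is the "no such file or directory" above
            return True
        except OSError as err:
            print(f"Is the agent running? ({err})")
            return False
        finally:
            s.close()
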
Apr 14 13:20:58.921081 containerd[1579]: time="2026-04-14T13:20:58.920895546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jc82b,Uid:35ec3304-c011-45a9-8315-a39d73673d17,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Apr 14 13:20:58.940703 kubelet[2702]: E0414 13:20:58.935268 2702 log.go:32] "RunPodSandbox from runtime service failed" err=< Apr 14 13:20:58.940703 kubelet[2702]: rpc error: code = Unknown desc = failed to setup network for sandbox "f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 14 13:20:58.940703 kubelet[2702]: Is the agent running? Apr 14 13:20:58.940703 kubelet[2702]: > Apr 14 13:20:59.419185 kubelet[2702]: E0414 13:20:59.416723 2702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Apr 14 13:20:59.419185 kubelet[2702]: rpc error: code = Unknown desc = failed to setup network for sandbox "f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 14 13:20:59.419185 kubelet[2702]: Is the agent running? Apr 14 13:20:59.419185 kubelet[2702]: > pod="kube-system/coredns-674b8bbfcf-jc82b" Apr 14 13:20:59.694222 kubelet[2702]: E0414 13:20:59.670666 2702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Apr 14 13:20:59.694222 kubelet[2702]: rpc error: code = Unknown desc = failed to setup network for sandbox "f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Apr 14 13:20:59.694222 kubelet[2702]: Is the agent running? 
Apr 14 13:20:59.694222 kubelet[2702]: > pod="kube-system/coredns-674b8bbfcf-jc82b"
Apr 14 13:21:00.814024 kubelet[2702]: E0414 13:21:00.672311 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jc82b_kube-system(35ec3304-c011-45a9-8315-a39d73673d17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jc82b_kube-system(35ec3304-c011-45a9-8315-a39d73673d17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:21:06.077343 kubelet[2702]: E0414 13:21:06.073153 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.042s"
Apr 14 13:21:08.033718 kubelet[2702]: E0414 13:21:08.033339 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:21:13.664861 kubelet[2702]: E0414 13:21:13.663369 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.427s"
Apr 14 13:21:13.836138 kubelet[2702]: I0414 13:21:13.833885 2702 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70"
Apr 14 13:21:13.851299 kubelet[2702]: I0414 13:21:13.848836 2702 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5"
Apr 14 13:21:15.506303 kubelet[2702]: E0414 13:21:15.273497 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:21:15.856490 containerd[1579]: time="2026-04-14T13:21:15.853804985Z" level=info msg="StopPodSandbox for \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\""
Apr 14 13:21:15.867710 containerd[1579]: time="2026-04-14T13:21:15.867163191Z" level=info msg="Ensure that sandbox f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70 in task-service has been cleanup successfully"
Apr 14 13:21:15.959365 kubelet[2702]: E0414 13:21:15.955967 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:21:16.086036 containerd[1579]: time="2026-04-14T13:21:16.007793849Z" level=info msg="StopPodSandbox for \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\""
Apr 14 13:21:16.666523 containerd[1579]: time="2026-04-14T13:21:16.649351564Z" level=info msg="Ensure that sandbox 103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5 in task-service has been cleanup successfully"
Apr 14 13:21:24.632191 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 14 13:21:24.844865 containerd[1579]: time="2026-04-14T13:21:24.816154950Z" level=info msg="TearDown network for sandbox \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\" successfully"
Apr 14 13:21:25.001864 containerd[1579]: time="2026-04-14T13:21:24.848770471Z" level=info msg="StopPodSandbox for \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\" returns successfully"
Apr 14 13:21:25.017196 systemd[1]: run-netns-cni\x2d76378559\x2d1d78\x2d9917\x2d2953\x2d5a3449f9cc7e.mount: Deactivated successfully.
Apr 14 13:21:25.165020 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 14 13:21:25.422309 containerd[1579]: time="2026-04-14T13:21:25.389690212Z" level=info msg="TearDown network for sandbox \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\" successfully"
Apr 14 13:21:25.422309 containerd[1579]: time="2026-04-14T13:21:25.390004539Z" level=info msg="StopPodSandbox for \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\" returns successfully"
Apr 14 13:21:25.633184 systemd[1]: run-netns-cni\x2da4970268\x2de133\x2d0513\x2d910c\x2d48fd2d297453.mount: Deactivated successfully.
Apr 14 13:21:32.145746 kubelet[2702]: E0414 13:21:32.071261 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:21:33.353756 kubelet[2702]: E0414 13:21:33.294365 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:21:37.045413 containerd[1579]: time="2026-04-14T13:21:37.045033020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hld84,Uid:67191550-1737-456e-a429-02a63e9c7256,Namespace:kube-system,Attempt:1,}"
Apr 14 13:21:56.825983 kubelet[2702]: E0414 13:21:56.820495 2702 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:47242->127.0.0.1:35173: write tcp 127.0.0.1:47242->127.0.0.1:35173: write: broken pipe
Apr 14 13:22:06.217027 containerd[1579]: time="2026-04-14T13:22:05.876982725Z" level=error msg="post event" error="context deadline exceeded"
Apr 14 13:22:06.655772 containerd[1579]: time="2026-04-14T13:22:06.638049815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jc82b,Uid:35ec3304-c011-45a9-8315-a39d73673d17,Namespace:kube-system,Attempt:1,}"
Apr 14 13:22:08.777325 containerd[1579]: time="2026-04-14T13:22:08.772776613Z" level=error msg="get state for d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5" error="context deadline exceeded: unknown"
Apr 14 13:22:09.578185 containerd[1579]: time="2026-04-14T13:22:08.772513220Z" level=error msg="ttrpc: received message on inactive stream" stream=45
Apr 14 13:22:10.371123 containerd[1579]: time="2026-04-14T13:22:09.577483596Z" level=warning msg="unknown status" status=0
Apr 14 13:22:10.945217 containerd[1579]: time="2026-04-14T13:22:10.170226671Z" level=error msg="ttrpc: received message on inactive stream" stream=9
Apr 14 13:22:12.779818 kubelet[2702]: E0414 13:21:59.640040 2702 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 14 13:22:13.475360 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5-rootfs.mount: Deactivated successfully.
Apr 14 13:22:14.295110 containerd[1579]: time="2026-04-14T13:22:14.231237641Z" level=info msg="shim disconnected" id=d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5 namespace=k8s.io
Apr 14 13:22:14.514442 containerd[1579]: time="2026-04-14T13:22:14.503274835Z" level=warning msg="cleaning up after shim disconnected" id=d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5 namespace=k8s.io
Apr 14 13:22:14.632526 containerd[1579]: time="2026-04-14T13:22:14.590823306Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:22:18.200102 containerd[1579]: time="2026-04-14T13:22:18.193931350Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5
Apr 14 13:22:18.349000 containerd[1579]: time="2026-04-14T13:22:18.345523284Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5 delete" error="exit status 1" namespace=k8s.io
Apr 14 13:22:18.579198 containerd[1579]: time="2026-04-14T13:22:18.387084680Z" level=warning msg="failed to clean up after shim disconnected" error="io.containerd.runc.v2: getwd: no such file or directory: exit status 1" id=d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5 namespace=k8s.io
Apr 14 13:22:18.845063 containerd[1579]: time="2026-04-14T13:22:18.787373152Z" level=error msg="failed to handle container TaskExit event container_id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" pid:2520 exit_status:1 exited_at:{seconds:1776172925 nanos:165512948}" error="failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 14 13:22:18.843325 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0-rootfs.mount: Deactivated successfully.
Apr 14 13:22:19.040195 containerd[1579]: time="2026-04-14T13:22:18.825462547Z" level=error msg="ttrpc: received message on inactive stream" stream=55
Apr 14 13:22:21.296178 containerd[1579]: time="2026-04-14T13:22:21.278459365Z" level=info msg="TaskExit event container_id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" pid:2520 exit_status:1 exited_at:{seconds:1776172925 nanos:165512948}"
Apr 14 13:22:30.689334 systemd-networkd[1253]: lxc19d9fd7d5516: Link UP
Apr 14 13:22:31.939798 containerd[1579]: time="2026-04-14T13:22:31.355244153Z" level=error msg="get state for 2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0" error="context deadline exceeded: unknown"
Apr 14 13:22:32.171131 containerd[1579]: time="2026-04-14T13:22:32.074221722Z" level=error msg="ttrpc: received message on inactive stream" stream=65
Apr 14 13:22:32.444794 containerd[1579]: time="2026-04-14T13:22:32.309221982Z" level=warning msg="unknown status" status=0
Apr 14 13:22:32.973221 kernel: eth0: renamed from tmp0edd7
Apr 14 13:22:35.419012 containerd[1579]: time="2026-04-14T13:22:35.415074357Z" level=error msg="Failed to handle backOff event container_id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" pid:2520 exit_status:1 exited_at:{seconds:1776172925 nanos:165512948} for 2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 14 13:22:35.555417 systemd-networkd[1253]: lxc19d9fd7d5516: Gained carrier
Apr 14 13:22:36.067832 containerd[1579]: time="2026-04-14T13:22:36.055156279Z" level=error msg="ttrpc: received message on inactive stream" stream=67
Apr 14 13:22:37.546803 systemd-networkd[1253]: lxc19d9fd7d5516: Gained IPv6LL
Apr 14 13:22:38.744114 containerd[1579]: time="2026-04-14T13:22:38.743697458Z" level=info msg="TaskExit event container_id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" pid:2520 exit_status:1 exited_at:{seconds:1776172925 nanos:165512948}"
Apr 14 13:22:49.718043 containerd[1579]: time="2026-04-14T13:22:49.646495019Z" level=error msg="ttrpc: received message on inactive stream" stream=79
Apr 14 13:22:49.942402 containerd[1579]: time="2026-04-14T13:22:49.541470368Z" level=error msg="Failed to handle backOff event container_id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" pid:2520 exit_status:1 exited_at:{seconds:1776172925 nanos:165512948} for 2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded: unknown"
Apr 14 13:22:51.786928 kubelet[2702]: E0414 13:22:49.443078 2702 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 14 13:22:52.667723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443-rootfs.mount: Deactivated successfully.
Apr 14 13:22:53.239176 containerd[1579]: time="2026-04-14T13:22:53.238695333Z" level=info msg="shim disconnected" id=67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443 namespace=k8s.io
Apr 14 13:22:53.263427 containerd[1579]: time="2026-04-14T13:22:53.263384889Z" level=warning msg="cleaning up after shim disconnected" id=67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443 namespace=k8s.io
Apr 14 13:22:53.331119 containerd[1579]: time="2026-04-14T13:22:53.290214518Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:22:55.072152 containerd[1579]: time="2026-04-14T13:22:55.058471957Z" level=error msg="failed to delete shim" error="1 error occurred:\n\t* close wait error: context deadline exceeded\n\n" id=67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443
Apr 14 13:22:55.160153 containerd[1579]: time="2026-04-14T13:22:55.158673605Z" level=info msg="TaskExit event container_id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" id:\"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" pid:2520 exit_status:1 exited_at:{seconds:1776172925 nanos:165512948}"
Apr 14 13:22:56.187876 containerd[1579]: time="2026-04-14T13:22:56.187377348Z" level=error msg="failed to delete" cmd="/usr/bin/containerd-shim-runc-v2 -namespace k8s.io -address /run/containerd/containerd.sock -publish-binary /usr/bin/containerd -id 67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443 -bundle /run/containerd/io.containerd.runtime.v2.task/k8s.io/67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443 delete" error="exit status 1" namespace=k8s.io
Apr 14 13:22:56.251189 containerd[1579]: time="2026-04-14T13:22:56.204652846Z" level=warning msg="failed to clean up after shim disconnected" error="io.containerd.runc.v2: getwd: no such file or directory: exit status 1" id=67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443 namespace=k8s.io
Apr 14 13:23:00.253357 containerd[1579]: time="2026-04-14T13:23:00.252382036Z" level=info msg="shim disconnected" id=2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0 namespace=k8s.io
Apr 14 13:23:00.480266 containerd[1579]: time="2026-04-14T13:23:00.466306300Z" level=warning msg="cleaning up after shim disconnected" id=2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0 namespace=k8s.io
Apr 14 13:23:00.480266 containerd[1579]: time="2026-04-14T13:23:00.475093765Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:23:03.284753 systemd-networkd[1253]: lxc190b3a9e200b: Link UP
Apr 14 13:23:03.593777 kernel: eth0: renamed from tmp9ae34
Apr 14 13:23:04.391359 systemd-networkd[1253]: lxc190b3a9e200b: Gained carrier
Apr 14 13:23:04.401931 kubelet[2702]: E0414 13:23:04.401776 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1m50.553s"
Apr 14 13:23:05.796240 systemd-networkd[1253]: lxc190b3a9e200b: Gained IPv6LL
Apr 14 13:23:06.642937 kubelet[2702]: E0414 13:23:06.642505 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:07.290941 kubelet[2702]: E0414 13:23:07.279329 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:07.346027 kubelet[2702]: E0414 13:23:07.345199 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:07.690049 containerd[1579]: time="2026-04-14T13:23:07.682071778Z" level=info msg="StopContainer for \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\" with timeout 30 (s)"
Apr 14 13:23:07.840963 containerd[1579]: time="2026-04-14T13:23:07.836953684Z" level=info msg="Container to stop \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Apr 14 13:23:07.996864 containerd[1579]: time="2026-04-14T13:23:07.971574847Z" level=info msg="StopContainer for \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\" returns successfully"
Apr 14 13:23:08.154044 kubelet[2702]: E0414 13:23:08.153391 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:10.427131 kubelet[2702]: E0414 13:23:10.426796 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:11.149808 kubelet[2702]: E0414 13:23:11.145882 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:11.171178 kubelet[2702]: E0414 13:23:11.140366 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.738s"
Apr 14 13:23:17.711309 containerd[1579]: time="2026-04-14T13:23:17.710712057Z" level=info msg="CreateContainer within sandbox \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Apr 14 13:23:21.139732 containerd[1579]: time="2026-04-14T13:23:21.139567723Z" level=info msg="CreateContainer within sandbox \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\""
Apr 14 13:23:23.055999 containerd[1579]: time="2026-04-14T13:23:22.936779259Z" level=info msg="StartContainer for \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\""
Apr 14 13:23:24.272215 kubelet[2702]: E0414 13:23:23.442342 2702 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod39798d73a6894e44ae801eb773bf9a39/2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0: task 2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0 not found
Apr 14 13:23:34.032503 kubelet[2702]: I0414 13:23:34.032229 2702 scope.go:117] "RemoveContainer" containerID="2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0"
Apr 14 13:23:34.438796 containerd[1579]: time="2026-04-14T13:23:34.437174761Z" level=info msg="StartContainer for \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\" returns successfully"
Apr 14 13:23:34.906624 kubelet[2702]: E0414 13:23:34.889704 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:23:42.628846 kubelet[2702]: E0414 13:23:42.628775 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="29.342s"
Apr 14 13:23:44.350185 containerd[1579]: time="2026-04-14T13:23:44.346607924Z" level=info msg="CreateContainer within sandbox \"a6985a73340d46b9950ef1958f21553514054108fd318ec6e2841ce6f50de949\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 14 13:23:45.998106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716430529.mount: Deactivated successfully.
Apr 14 13:23:46.783837 containerd[1579]: time="2026-04-14T13:23:46.783165411Z" level=info msg="CreateContainer within sandbox \"a6985a73340d46b9950ef1958f21553514054108fd318ec6e2841ce6f50de949\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"685478545ecea7e061c2288e672aca9ee4425b5979c3d84e667fe155027fecbb\""
Apr 14 13:23:53.680396 systemd[1]: run-containerd-runc-k8s.io-396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc-runc.HPAGWQ.mount: Deactivated successfully.
Apr 14 13:23:57.977026 update_engine[1564]: I20260414 13:23:57.973148 1564 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 14 13:23:58.044399 update_engine[1564]: I20260414 13:23:58.018015 1564 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 14 13:23:58.044399 update_engine[1564]: I20260414 13:23:58.039130 1564 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 14 13:23:58.198903 update_engine[1564]: I20260414 13:23:58.188792 1564 omaha_request_params.cc:62] Current group set to lts
Apr 14 13:23:58.223344 update_engine[1564]: I20260414 13:23:58.205461 1564 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 14 13:23:58.223344 update_engine[1564]: I20260414 13:23:58.205727 1564 update_attempter.cc:643] Scheduling an action processor start.
Apr 14 13:23:58.223344 update_engine[1564]: I20260414 13:23:58.205806 1564 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 14 13:23:58.223344 update_engine[1564]: I20260414 13:23:58.218410 1564 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 14 13:23:58.463368 update_engine[1564]: I20260414 13:23:58.232877 1564 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 14 13:23:58.463368 update_engine[1564]: I20260414 13:23:58.233084 1564 omaha_request_action.cc:272] Request:
Apr 14 13:23:58.463368 update_engine[1564]:
Apr 14 13:23:58.463368 update_engine[1564]:
Apr 14 13:23:58.463368 update_engine[1564]:
Apr 14 13:23:58.463368 update_engine[1564]:
Apr 14 13:23:58.463368 update_engine[1564]:
Apr 14 13:23:58.463368 update_engine[1564]:
Apr 14 13:23:58.463368 update_engine[1564]:
Apr 14 13:23:58.463368 update_engine[1564]:
Apr 14 13:23:58.463368 update_engine[1564]: I20260414 13:23:58.233095 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 13:23:58.472859 update_engine[1564]: I20260414 13:23:58.469398 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 13:23:58.701727 locksmithd[1616]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 14 13:23:58.729758 update_engine[1564]: I20260414 13:23:58.729614 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 13:23:58.755058 update_engine[1564]: E20260414 13:23:58.750344 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 13:23:58.855059 update_engine[1564]: I20260414 13:23:58.850731 1564 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 14 13:24:05.925298 kubelet[2702]: E0414 13:24:05.919764 2702 manager.go:1116] Failed to create existing container: /kubepods/burstable/podebf8e820819e4b80bc03d078b9ba80f5/d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5: task d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5 not found
Apr 14 13:24:06.652184 containerd[1579]: time="2026-04-14T13:24:06.651866893Z" level=info msg="StartContainer for \"685478545ecea7e061c2288e672aca9ee4425b5979c3d84e667fe155027fecbb\""
Apr 14 13:24:08.856124 update_engine[1564]: I20260414 13:24:08.792399 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 13:24:09.182386 update_engine[1564]: I20260414 13:24:09.018990 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 13:24:09.452589 update_engine[1564]: I20260414 13:24:09.333401 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 13:24:09.443151 systemd-networkd[1253]: lxc19d9fd7d5516: Link DOWN
Apr 14 13:24:09.646455 update_engine[1564]: E20260414 13:24:09.453133 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 13:24:09.646455 update_engine[1564]: I20260414 13:24:09.453439 1564 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 14 13:24:09.443178 systemd-networkd[1253]: lxc19d9fd7d5516: Lost carrier
Apr 14 13:24:11.945976 kubelet[2702]: E0414 13:24:11.945889 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="29.173s"
Apr 14 13:24:15.033156 kubelet[2702]: I0414 13:24:15.019518 2702 scope.go:117] "RemoveContainer" containerID="2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0"
Apr 14 13:24:15.282812 containerd[1579]: time="2026-04-14T13:24:15.262066081Z" level=info msg="StartContainer for \"685478545ecea7e061c2288e672aca9ee4425b5979c3d84e667fe155027fecbb\" returns successfully"
Apr 14 13:24:19.788118 update_engine[1564]: I20260414 13:24:19.760872 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 13:24:19.868102 update_engine[1564]: I20260414 13:24:19.834975 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 13:24:19.911694 update_engine[1564]: I20260414 13:24:19.908838 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 13:24:20.161869 kubelet[2702]: E0414 13:24:20.137567 2702 controller.go:195] "Failed to update lease" err="Put \"https://10.0.0.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 14 13:24:20.168740 update_engine[1564]: E20260414 13:24:20.148078 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 13:24:20.168740 update_engine[1564]: I20260414 13:24:20.164911 1564 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 14 13:24:22.938809 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 14 13:24:23.108693 systemd[1]: run-netns-cni\x2d24aa3166\x2d802f\x2d7b09\x2d05a2\x2db31a3d0944f1.mount: Deactivated successfully.
Apr 14 13:24:23.109800 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0edd7b7639877358c596b385e4db7c7f940d0beb65293622a479a9ec00535ae6-shm.mount: Deactivated successfully.
Apr 14 13:24:23.516703 containerd[1579]: time="2026-04-14T13:24:23.469304995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hld84,Uid:67191550-1737-456e-a429-02a63e9c7256,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0edd7b7639877358c596b385e4db7c7f940d0beb65293622a479a9ec00535ae6\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded"
Apr 14 13:24:24.871291 kubelet[2702]: E0414 13:24:24.866455 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0edd7b7639877358c596b385e4db7c7f940d0beb65293622a479a9ec00535ae6\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded"
Apr 14 13:24:24.873134 kubelet[2702]: E0414 13:24:24.872683 2702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0edd7b7639877358c596b385e4db7c7f940d0beb65293622a479a9ec00535ae6\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-hld84"
Apr 14 13:24:24.873134 kubelet[2702]: E0414 13:24:24.873061 2702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0edd7b7639877358c596b385e4db7c7f940d0beb65293622a479a9ec00535ae6\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-hld84"
Apr 14 13:24:24.881877 kubelet[2702]: E0414 13:24:24.873418 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hld84_kube-system(67191550-1737-456e-a429-02a63e9c7256)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hld84_kube-system(67191550-1737-456e-a429-02a63e9c7256)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0edd7b7639877358c596b385e4db7c7f940d0beb65293622a479a9ec00535ae6\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded\"" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:24:25.676007 containerd[1579]: time="2026-04-14T13:24:25.660500779Z" level=info msg="RemoveContainer for \"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\""
Apr 14 13:24:26.166520 containerd[1579]: time="2026-04-14T13:24:26.158854780Z" level=info msg="RemoveContainer for \"2b1de20b330a2414306bdb2f2378a3719bc33e3e3756060fa143554786d673e0\" returns successfully"
Apr 14 13:24:26.933216 kubelet[2702]: E0414 13:24:26.928802 2702 controller.go:195] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 14 13:24:28.080285 sudo[1769]: pam_unix(sudo:session): session closed for user root
Apr 14 13:24:28.152400 sshd[1762]: pam_unix(sshd:session): session closed for user core
Apr 14 13:24:28.820448 systemd[1]: sshd@6-10.0.0.13:22-10.0.0.1:59912.service: Deactivated successfully.
Apr 14 13:24:28.979607 systemd[1]: session-7.scope: Deactivated successfully.
Apr 14 13:24:29.281063 systemd-logind[1561]: Session 7 logged out. Waiting for processes to exit.
Apr 14 13:24:29.671115 systemd-logind[1561]: Removed session 7.
Apr 14 13:24:30.666141 kubelet[2702]: E0414 13:24:30.663281 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="18.706s"
Apr 14 13:24:30.758133 update_engine[1564]: I20260414 13:24:30.757397 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 13:24:30.811781 update_engine[1564]: I20260414 13:24:30.808921 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 13:24:30.828124 update_engine[1564]: I20260414 13:24:30.827649 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 13:24:30.885302 update_engine[1564]: E20260414 13:24:30.875191 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 13:24:30.885302 update_engine[1564]: I20260414 13:24:30.883848 1564 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 14 13:24:30.939800 update_engine[1564]: I20260414 13:24:30.890010 1564 omaha_request_action.cc:617] Omaha request response:
Apr 14 13:24:30.960873 update_engine[1564]: E20260414 13:24:30.956105 1564 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 14 13:24:30.960873 update_engine[1564]: I20260414 13:24:30.957696 1564 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 14 13:24:30.960873 update_engine[1564]: I20260414 13:24:30.957717 1564 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 14 13:24:30.960873 update_engine[1564]: I20260414 13:24:30.957721 1564 update_attempter.cc:306] Processing Done.
Apr 14 13:24:30.960873 update_engine[1564]: E20260414 13:24:30.957795 1564 update_attempter.cc:619] Update failed.
Apr 14 13:24:30.960873 update_engine[1564]: I20260414 13:24:30.957825 1564 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 14 13:24:30.960873 update_engine[1564]: I20260414 13:24:30.957829 1564 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 14 13:24:30.960873 update_engine[1564]: I20260414 13:24:30.957834 1564 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 14 13:24:30.960873 update_engine[1564]: I20260414 13:24:30.958429 1564 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 14 13:24:30.960873 update_engine[1564]: I20260414 13:24:30.958488 1564 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 14 13:24:30.960873 update_engine[1564]: I20260414 13:24:30.958492 1564 omaha_request_action.cc:272] Request:
Apr 14 13:24:30.960873 update_engine[1564]:
Apr 14 13:24:30.960873 update_engine[1564]:
Apr 14 13:24:30.960873 update_engine[1564]:
Apr 14 13:24:30.960873 update_engine[1564]:
Apr 14 13:24:30.960873 update_engine[1564]:
Apr 14 13:24:30.960873 update_engine[1564]:
Apr 14 13:24:30.960873 update_engine[1564]: I20260414 13:24:30.958498 1564 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 14 13:24:30.963266 locksmithd[1616]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 14 13:24:30.968860 update_engine[1564]: I20260414 13:24:30.963773 1564 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 14 13:24:31.018665 update_engine[1564]: I20260414 13:24:31.017728 1564 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 14 13:24:31.104121 update_engine[1564]: E20260414 13:24:31.060220 1564 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 14 13:24:31.166784 update_engine[1564]: I20260414 13:24:31.166255 1564 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 14 13:24:31.166784 update_engine[1564]: I20260414 13:24:31.166795 1564 omaha_request_action.cc:617] Omaha request response:
Apr 14 13:24:31.166784 update_engine[1564]: I20260414 13:24:31.166805 1564 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 14 13:24:31.166784 update_engine[1564]: I20260414 13:24:31.166810 1564 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 14 13:24:31.166784 update_engine[1564]: I20260414 13:24:31.166816 1564 update_attempter.cc:306] Processing Done.
Apr 14 13:24:31.167432 update_engine[1564]: I20260414 13:24:31.166867 1564 update_attempter.cc:310] Error event sent.
Apr 14 13:24:31.167432 update_engine[1564]: I20260414 13:24:31.167050 1564 update_check_scheduler.cc:74] Next update check in 46m56s
Apr 14 13:24:31.387961 locksmithd[1616]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 14 13:24:35.235458 systemd-networkd[1253]: lxc190b3a9e200b: Link DOWN
Apr 14 13:24:35.236130 systemd-networkd[1253]: lxc190b3a9e200b: Lost carrier
Apr 14 13:24:44.353163 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 14 13:24:44.541138 systemd[1]: run-netns-cni\x2d8375a7c2\x2de241\x2d32fb\x2d1f45\x2dde4dcdaea5ec.mount: Deactivated successfully.
Apr 14 13:24:44.573902 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ae3408147d2038469e3ba1bd96bf8d3b830f89618f4c4d51cfdf15769c7c154-shm.mount: Deactivated successfully.
Apr 14 13:24:45.663797 containerd[1579]: time="2026-04-14T13:24:45.663135122Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jc82b,Uid:35ec3304-c011-45a9-8315-a39d73673d17,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9ae3408147d2038469e3ba1bd96bf8d3b830f89618f4c4d51cfdf15769c7c154\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded"
Apr 14 13:24:54.042143 kubelet[2702]: E0414 13:24:54.041576 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae3408147d2038469e3ba1bd96bf8d3b830f89618f4c4d51cfdf15769c7c154\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded"
Apr 14 13:24:55.062255 kubelet[2702]: E0414 13:24:55.062115 2702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae3408147d2038469e3ba1bd96bf8d3b830f89618f4c4d51cfdf15769c7c154\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-jc82b"
Apr 14 13:24:55.062255 kubelet[2702]: E0414 13:24:55.062242 2702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ae3408147d2038469e3ba1bd96bf8d3b830f89618f4c4d51cfdf15769c7c154\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-jc82b"
Apr 14 13:24:55.063128 kubelet[2702]: E0414 13:24:55.062473 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jc82b_kube-system(35ec3304-c011-45a9-8315-a39d73673d17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jc82b_kube-system(35ec3304-c011-45a9-8315-a39d73673d17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ae3408147d2038469e3ba1bd96bf8d3b830f89618f4c4d51cfdf15769c7c154\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded\"" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:24:55.931748 kubelet[2702]: E0414 13:24:55.928882 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="25.265s"
Apr 14 13:24:56.449442 kubelet[2702]: E0414 13:24:56.442310 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:56.460178 kubelet[2702]: E0414 13:24:56.447101 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:56.742870 kubelet[2702]: E0414 13:24:56.742059 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:56.952460 kubelet[2702]: E0414 13:24:56.952203 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:56.969690 kubelet[2702]: I0414 13:24:56.953209 2702 scope.go:117] "RemoveContainer" containerID="d968e5901e139a31f5fe2e10bab05bfc1b7422ea305eb7aca01f64228d15b0f5"
Apr 14 13:24:57.380117 kubelet[2702]: E0414 13:24:57.379945 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:24:58.040162 kubelet[2702]: E0414 13:24:58.030382 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.093s"
Apr 14 13:24:58.326791 containerd[1579]: time="2026-04-14T13:24:58.325824317Z" level=info msg="CreateContainer within sandbox \"6c1daf096b35f89128a39c02ce2390d78cf7315ee1293c9feb081f3460391859\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 14 13:24:59.169303 containerd[1579]: time="2026-04-14T13:24:59.163498105Z" level=info msg="CreateContainer within sandbox \"6c1daf096b35f89128a39c02ce2390d78cf7315ee1293c9feb081f3460391859\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"1159bcd1e246f5de59f0f11b2084fb0c44e703aa040661679399c46fec5f3480\""
Apr 14 13:24:59.541996 containerd[1579]: time="2026-04-14T13:24:59.536489424Z" level=info msg="StartContainer for \"1159bcd1e246f5de59f0f11b2084fb0c44e703aa040661679399c46fec5f3480\""
Apr 14 13:24:59.858199 kubelet[2702]: E0414 13:24:59.842968 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.802s"
Apr 14 13:25:00.453980 kubelet[2702]: E0414 13:25:00.452371 2702 manager.go:1116] Failed to create existing container: /kubepods/besteffort/podd09ec03a-4296-44dc-b569-d7f061ca22b0/67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443: task 67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443 not found
Apr 14 13:25:00.495626 containerd[1579]: time="2026-04-14T13:25:00.495290766Z" level=info msg="StopPodSandbox for \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\""
Apr 14 13:25:00.689424 kubelet[2702]: E0414 13:25:00.689126 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:00.965626 kubelet[2702]: E0414 13:25:00.965575 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:01.114590 kubelet[2702]: E0414 13:25:01.105436 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:01.114744 containerd[1579]: time="2026-04-14T13:25:01.112854451Z" level=info msg="StopPodSandbox for \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\""
Apr 14 13:25:01.339862 kubelet[2702]: E0414 13:25:01.336908 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.234s"
Apr 14 13:25:01.757935 containerd[1579]: time="2026-04-14T13:25:01.757699634Z" level=info msg="StartContainer for \"1159bcd1e246f5de59f0f11b2084fb0c44e703aa040661679399c46fec5f3480\" returns successfully"
Apr 14 13:25:02.251910 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 14 13:25:02.251910 containerd[1579]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni
Apr 14 13:25:02.274016 containerd[1579]: time="2026-04-14T13:25:02.271211461Z" level=info msg="TearDown network for sandbox \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\" successfully"
Apr 14 13:25:02.369793 containerd[1579]: time="2026-04-14T13:25:02.366047353Z" level=info msg="StopPodSandbox for \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\" returns successfully"
Apr 14 13:25:02.629838 kubelet[2702]: E0414 13:25:02.616740 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:02.655290 containerd[1579]: time="2026-04-14T13:25:02.651152970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jc82b,Uid:35ec3304-c011-45a9-8315-a39d73673d17,Namespace:kube-system,Attempt:1,}"
Apr 14 13:25:02.846383 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 14 13:25:02.846383 containerd[1579]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni
Apr 14 13:25:02.846383 containerd[1579]: time="2026-04-14T13:25:02.846108575Z" level=info msg="TearDown network for sandbox \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\" successfully"
Apr 14 13:25:02.846383 containerd[1579]: time="2026-04-14T13:25:02.846189649Z" level=info msg="StopPodSandbox for \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\" returns successfully"
Apr 14 13:25:03.066129 kubelet[2702]: E0414 13:25:03.063345 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:03.271526 containerd[1579]: time="2026-04-14T13:25:03.270722546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hld84,Uid:67191550-1737-456e-a429-02a63e9c7256,Namespace:kube-system,Attempt:1,}"
Apr 14 13:25:04.680185 systemd-networkd[1253]: lxc7e8dffbb65af: Link UP
Apr 14 13:25:04.734052 kernel: eth0: renamed from tmpba69c
Apr 14 13:25:04.758142 systemd-networkd[1253]: lxc7e8dffbb65af: Gained carrier
Apr 14 13:25:04.811810 systemd-networkd[1253]: lxcea456c2680e6: Link UP
Apr 14 13:25:04.836489 kernel: eth0: renamed from tmp0f645
Apr 14 13:25:04.896143 kubelet[2702]: E0414 13:25:04.893487 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:04.896143 kubelet[2702]: E0414 13:25:04.930932 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:04.896143 kubelet[2702]: E0414 13:25:04.931252 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:05.029446 systemd-networkd[1253]: lxcea456c2680e6: Gained carrier
Apr 14 13:25:06.236062 systemd-networkd[1253]: lxcea456c2680e6: Gained IPv6LL
Apr 14 13:25:06.591438 systemd-networkd[1253]: lxc7e8dffbb65af: Gained IPv6LL
Apr 14 13:25:07.330068 kubelet[2702]: E0414 13:25:07.329685 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:07.465613 kubelet[2702]: E0414 13:25:07.463899 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:08.147309 kubelet[2702]: E0414 13:25:08.144131 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:09.489789 kubelet[2702]: E0414 13:25:09.486500 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:14.063894 kubelet[2702]: E0414 13:25:14.063866 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:14.648224 kubelet[2702]: E0414 13:25:14.648157 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:25:41.157144 kubelet[2702]: E0414 13:25:41.156679 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.21s"
Apr 14 13:25:43.443803 systemd[1]: Started sshd@7-10.0.0.13:22-10.0.0.1:33338.service - OpenSSH per-connection server daemon (10.0.0.1:33338).
Apr 14 13:25:44.291950 sshd[4683]: Accepted publickey for core from 10.0.0.1 port 33338 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:25:44.326082 sshd[4683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:25:44.504078 systemd-logind[1561]: New session 8 of user core.
Apr 14 13:25:44.518955 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 14 13:25:47.191784 sshd[4683]: pam_unix(sshd:session): session closed for user core
Apr 14 13:25:47.530662 systemd[1]: sshd@7-10.0.0.13:22-10.0.0.1:33338.service: Deactivated successfully.
Apr 14 13:25:47.863660 systemd[1]: session-8.scope: Deactivated successfully.
Apr 14 13:25:48.084870 systemd-logind[1561]: Session 8 logged out. Waiting for processes to exit.
Apr 14 13:25:48.457465 systemd-logind[1561]: Removed session 8.
Apr 14 13:25:52.300454 systemd[1]: Started sshd@8-10.0.0.13:22-10.0.0.1:48118.service - OpenSSH per-connection server daemon (10.0.0.1:48118).
Apr 14 13:25:52.722385 sshd[4705]: Accepted publickey for core from 10.0.0.1 port 48118 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:25:52.740125 sshd[4705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:25:53.308303 systemd-logind[1561]: New session 9 of user core.
Apr 14 13:25:53.372359 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 14 13:25:57.742514 sshd[4705]: pam_unix(sshd:session): session closed for user core
Apr 14 13:25:57.965927 systemd[1]: sshd@8-10.0.0.13:22-10.0.0.1:48118.service: Deactivated successfully.
Apr 14 13:25:58.147959 systemd[1]: session-9.scope: Deactivated successfully.
Apr 14 13:25:58.247383 systemd-logind[1561]: Session 9 logged out. Waiting for processes to exit.
Apr 14 13:25:58.443031 systemd-logind[1561]: Removed session 9.
Apr 14 13:25:58.634773 kubelet[2702]: E0414 13:25:58.574455 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:26:03.024104 systemd[1]: Started sshd@9-10.0.0.13:22-10.0.0.1:51140.service - OpenSSH per-connection server daemon (10.0.0.1:51140).
Apr 14 13:26:03.525909 sshd[4722]: Accepted publickey for core from 10.0.0.1 port 51140 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:03.527338 sshd[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:03.576508 systemd-logind[1561]: New session 10 of user core.
Apr 14 13:26:03.712817 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 14 13:26:04.555220 sshd[4722]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:04.594599 systemd[1]: sshd@9-10.0.0.13:22-10.0.0.1:51140.service: Deactivated successfully.
Apr 14 13:26:04.694803 systemd-logind[1561]: Session 10 logged out. Waiting for processes to exit.
Apr 14 13:26:04.701895 systemd[1]: session-10.scope: Deactivated successfully.
Apr 14 13:26:04.715873 systemd-logind[1561]: Removed session 10.
Apr 14 13:26:09.560832 systemd[1]: Started sshd@10-10.0.0.13:22-10.0.0.1:48680.service - OpenSSH per-connection server daemon (10.0.0.1:48680).
Apr 14 13:26:10.248111 kubelet[2702]: E0414 13:26:10.246914 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:26:10.774262 sshd[4741]: Accepted publickey for core from 10.0.0.1 port 48680 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:10.807856 sshd[4741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:11.351272 systemd-logind[1561]: New session 11 of user core.
Apr 14 13:26:11.383928 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 14 13:26:12.448363 sshd[4741]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:12.451855 systemd-logind[1561]: Session 11 logged out. Waiting for processes to exit.
Apr 14 13:26:12.451991 systemd[1]: sshd@10-10.0.0.13:22-10.0.0.1:48680.service: Deactivated successfully.
Apr 14 13:26:12.471180 systemd[1]: session-11.scope: Deactivated successfully.
Apr 14 13:26:12.473096 systemd-logind[1561]: Removed session 11.
Apr 14 13:26:13.036746 kubelet[2702]: E0414 13:26:13.036690 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:26:17.562054 systemd[1]: Started sshd@11-10.0.0.13:22-10.0.0.1:48690.service - OpenSSH per-connection server daemon (10.0.0.1:48690).
Apr 14 13:26:18.176947 sshd[4760]: Accepted publickey for core from 10.0.0.1 port 48690 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:18.210929 sshd[4760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:18.225342 systemd-logind[1561]: New session 12 of user core.
Apr 14 13:26:18.228850 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 14 13:26:20.414609 sshd[4760]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:20.424808 systemd[1]: Started sshd@12-10.0.0.13:22-10.0.0.1:46230.service - OpenSSH per-connection server daemon (10.0.0.1:46230).
Apr 14 13:26:20.425157 systemd[1]: sshd@11-10.0.0.13:22-10.0.0.1:48690.service: Deactivated successfully.
Apr 14 13:26:20.439967 systemd-logind[1561]: Session 12 logged out. Waiting for processes to exit.
Apr 14 13:26:20.440034 systemd[1]: session-12.scope: Deactivated successfully.
Apr 14 13:26:20.442277 systemd-logind[1561]: Removed session 12.
Apr 14 13:26:20.494098 sshd[4773]: Accepted publickey for core from 10.0.0.1 port 46230 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:20.499292 sshd[4773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:20.553834 systemd-logind[1561]: New session 13 of user core.
Apr 14 13:26:20.568007 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 14 13:26:21.815369 systemd-networkd[1253]: lxc_health: Link DOWN
Apr 14 13:26:21.815378 systemd-networkd[1253]: lxc_health: Lost carrier
Apr 14 13:26:21.956032 systemd-networkd[1253]: lxc_health: Link UP
Apr 14 13:26:21.973190 systemd-networkd[1253]: lxc_health: Gained carrier
Apr 14 13:26:22.562434 sshd[4773]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:22.753170 systemd[1]: sshd@12-10.0.0.13:22-10.0.0.1:46230.service: Deactivated successfully.
Apr 14 13:26:22.940422 systemd[1]: session-13.scope: Deactivated successfully.
Apr 14 13:26:22.975215 systemd-logind[1561]: Session 13 logged out. Waiting for processes to exit.
Apr 14 13:26:23.256123 systemd[1]: Started sshd@13-10.0.0.13:22-10.0.0.1:46232.service - OpenSSH per-connection server daemon (10.0.0.1:46232).
Apr 14 13:26:23.553474 systemd-logind[1561]: Removed session 13.
Apr 14 13:26:23.686756 systemd-networkd[1253]: lxc_health: Gained IPv6LL
Apr 14 13:26:25.078414 sshd[4816]: Accepted publickey for core from 10.0.0.1 port 46232 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:25.102791 sshd[4816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:25.288335 kubelet[2702]: E0414 13:26:25.284293 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.204s"
Apr 14 13:26:25.449214 systemd-logind[1561]: New session 14 of user core.
Apr 14 13:26:25.454400 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 14 13:26:26.763695 sshd[4816]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:26.853797 systemd[1]: sshd@13-10.0.0.13:22-10.0.0.1:46232.service: Deactivated successfully.
Apr 14 13:26:26.882263 systemd[1]: session-14.scope: Deactivated successfully.
Apr 14 13:26:26.891306 systemd-logind[1561]: Session 14 logged out. Waiting for processes to exit.
Apr 14 13:26:26.972604 kubelet[2702]: E0414 13:26:26.972439 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:26:26.973017 systemd-logind[1561]: Removed session 14.
Apr 14 13:26:29.945442 kubelet[2702]: E0414 13:26:29.945307 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:26:31.815652 systemd[1]: Started sshd@14-10.0.0.13:22-10.0.0.1:37176.service - OpenSSH per-connection server daemon (10.0.0.1:37176).
Apr 14 13:26:32.041618 sshd[4834]: Accepted publickey for core from 10.0.0.1 port 37176 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s
Apr 14 13:26:32.043840 sshd[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 13:26:32.089938 systemd-logind[1561]: New session 15 of user core.
Apr 14 13:26:32.096588 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 14 13:26:33.392667 sshd[4834]: pam_unix(sshd:session): session closed for user core
Apr 14 13:26:33.495589 systemd[1]: sshd@14-10.0.0.13:22-10.0.0.1:37176.service: Deactivated successfully.
Apr 14 13:26:33.507510 systemd[1]: session-15.scope: Deactivated successfully.
Apr 14 13:26:33.535777 systemd-logind[1561]: Session 15 logged out. Waiting for processes to exit.
Apr 14 13:26:33.543076 systemd-logind[1561]: Removed session 15.
Apr 14 13:26:34.855519 systemd-networkd[1253]: lxc7e8dffbb65af: Link DOWN
Apr 14 13:26:34.855528 systemd-networkd[1253]: lxc7e8dffbb65af: Lost carrier
Apr 14 13:26:35.366330 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 14 13:26:35.384023 systemd[1]: run-netns-cni\x2d4739bfd2\x2db4ba\x2d87e8\x2d734c\x2d4395ab8be2e1.mount: Deactivated successfully.
Apr 14 13:26:35.384764 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba69cb3078b2f4362228a567c3fd9c201b8a15991c9145fe71f91d906fa40e82-shm.mount: Deactivated successfully.
Apr 14 13:26:35.384929 containerd[1579]: time="2026-04-14T13:26:35.384162598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hld84,Uid:67191550-1737-456e-a429-02a63e9c7256,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ba69cb3078b2f4362228a567c3fd9c201b8a15991c9145fe71f91d906fa40e82\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded"
Apr 14 13:26:35.423506 kubelet[2702]: E0414 13:26:35.422260 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba69cb3078b2f4362228a567c3fd9c201b8a15991c9145fe71f91d906fa40e82\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded"
Apr 14 13:26:35.423506 kubelet[2702]: E0414 13:26:35.422517 2702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba69cb3078b2f4362228a567c3fd9c201b8a15991c9145fe71f91d906fa40e82\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-hld84"
Apr 14 13:26:35.423506 kubelet[2702]: E0414 13:26:35.422584 2702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba69cb3078b2f4362228a567c3fd9c201b8a15991c9145fe71f91d906fa40e82\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-hld84"
Apr 14 13:26:35.423506 kubelet[2702]: E0414 13:26:35.422758 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hld84_kube-system(67191550-1737-456e-a429-02a63e9c7256)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hld84_kube-system(67191550-1737-456e-a429-02a63e9c7256)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba69cb3078b2f4362228a567c3fd9c201b8a15991c9145fe71f91d906fa40e82\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded\"" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:26:35.741250 systemd-networkd[1253]: lxcea456c2680e6: Link DOWN
Apr 14 13:26:35.741649 systemd-networkd[1253]: lxcea456c2680e6: Lost carrier
Apr 14 13:26:35.929404 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni
Apr 14 13:26:35.943505 systemd[1]: run-netns-cni\x2d433f098b\x2d145b\x2d8d5f\x2da896\x2df88bc5ef4299.mount: Deactivated successfully.
Apr 14 13:26:35.966784 containerd[1579]: time="2026-04-14T13:26:35.964195543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jc82b,Uid:35ec3304-c011-45a9-8315-a39d73673d17,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0f645cb15b4270d9df6f23a4dc290ac8d1cc98ae30169590d5d0f296051fc42c\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 14 13:26:35.972369 kubelet[2702]: E0414 13:26:35.972006 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f645cb15b4270d9df6f23a4dc290ac8d1cc98ae30169590d5d0f296051fc42c\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 14 13:26:35.979650 kubelet[2702]: E0414 13:26:35.972930 2702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f645cb15b4270d9df6f23a4dc290ac8d1cc98ae30169590d5d0f296051fc42c\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-jc82b" Apr 14 13:26:35.979650 kubelet[2702]: E0414 13:26:35.972957 2702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f645cb15b4270d9df6f23a4dc290ac8d1cc98ae30169590d5d0f296051fc42c\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-jc82b" Apr 14 13:26:35.979650 kubelet[2702]: E0414 13:26:35.973134 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jc82b_kube-system(35ec3304-c011-45a9-8315-a39d73673d17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jc82b_kube-system(35ec3304-c011-45a9-8315-a39d73673d17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f645cb15b4270d9df6f23a4dc290ac8d1cc98ae30169590d5d0f296051fc42c\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded\"" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:26:35.973597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0f645cb15b4270d9df6f23a4dc290ac8d1cc98ae30169590d5d0f296051fc42c-shm.mount: Deactivated successfully. 
Apr 14 13:26:36.343695 containerd[1579]: time="2026-04-14T13:26:36.343573882Z" level=info msg="StopPodSandbox for \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\"" Apr 14 13:26:36.558508 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Apr 14 13:26:36.558508 containerd[1579]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni Apr 14 13:26:36.559479 containerd[1579]: time="2026-04-14T13:26:36.558754183Z" level=info msg="TearDown network for sandbox \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\" successfully" Apr 14 13:26:36.559479 containerd[1579]: time="2026-04-14T13:26:36.558826573Z" level=info msg="StopPodSandbox for \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\" returns successfully" Apr 14 13:26:36.561072 kubelet[2702]: E0414 13:26:36.560792 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:26:36.563200 containerd[1579]: time="2026-04-14T13:26:36.563132761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jc82b,Uid:35ec3304-c011-45a9-8315-a39d73673d17,Namespace:kube-system,Attempt:1,}" Apr 14 13:26:37.042041 systemd-networkd[1253]: lxc23501c599eac: Link UP Apr 14 13:26:37.061814 kernel: eth0: renamed from tmpbcc29 Apr 14 13:26:37.073301 systemd-networkd[1253]: lxc23501c599eac: Gained carrier Apr 14 13:26:38.551274 systemd[1]: Started sshd@15-10.0.0.13:22-10.0.0.1:37186.service - OpenSSH per-connection server daemon (10.0.0.1:37186). Apr 14 13:26:38.662656 sshd[4922]: Accepted publickey for core from 10.0.0.1 port 37186 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:26:38.673772 sshd[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:26:38.760965 systemd-logind[1561]: New session 16 of user core. Apr 14 13:26:38.768510 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 14 13:26:38.982794 systemd-networkd[1253]: lxc23501c599eac: Gained IPv6LL Apr 14 13:26:39.041626 kubelet[2702]: E0414 13:26:39.039842 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:26:39.907150 sshd[4922]: pam_unix(sshd:session): session closed for user core Apr 14 13:26:39.940322 systemd[1]: sshd@15-10.0.0.13:22-10.0.0.1:37186.service: Deactivated successfully. Apr 14 13:26:39.959992 systemd[1]: session-16.scope: Deactivated successfully. Apr 14 13:26:39.973700 systemd-logind[1561]: Session 16 logged out. Waiting for processes to exit. Apr 14 13:26:39.994302 systemd-logind[1561]: Removed session 16. Apr 14 13:26:44.968790 systemd[1]: Started sshd@16-10.0.0.13:22-10.0.0.1:43078.service - OpenSSH per-connection server daemon (10.0.0.1:43078). Apr 14 13:26:45.219410 sshd[4937]: Accepted publickey for core from 10.0.0.1 port 43078 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:26:45.227720 sshd[4937]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:26:45.357748 systemd-logind[1561]: New session 17 of user core. Apr 14 13:26:45.414017 systemd[1]: Started session-17.scope - Session 17 of User core. 
Apr 14 13:26:48.355012 sshd[4937]: pam_unix(sshd:session): session closed for user core Apr 14 13:26:48.386764 systemd[1]: sshd@16-10.0.0.13:22-10.0.0.1:43078.service: Deactivated successfully. Apr 14 13:26:48.445688 systemd[1]: session-17.scope: Deactivated successfully. Apr 14 13:26:48.445769 systemd-logind[1561]: Session 17 logged out. Waiting for processes to exit. Apr 14 13:26:48.449765 systemd-logind[1561]: Removed session 17. Apr 14 13:26:49.038451 containerd[1579]: time="2026-04-14T13:26:49.038259478Z" level=info msg="StopPodSandbox for \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\"" Apr 14 13:26:49.820563 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Apr 14 13:26:49.820563 containerd[1579]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni Apr 14 13:26:49.820563 containerd[1579]: time="2026-04-14T13:26:49.820605452Z" level=info msg="TearDown network for sandbox \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\" successfully" Apr 14 13:26:49.820563 containerd[1579]: time="2026-04-14T13:26:49.820628129Z" level=info msg="StopPodSandbox for \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\" returns successfully" Apr 14 13:26:49.821747 kubelet[2702]: E0414 13:26:49.821720 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:26:49.822305 containerd[1579]: time="2026-04-14T13:26:49.822263463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hld84,Uid:67191550-1737-456e-a429-02a63e9c7256,Namespace:kube-system,Attempt:1,}" Apr 14 13:26:50.461932 systemd-networkd[1253]: lxc4c32fe9e2552: Link UP Apr 14 13:26:50.612851 kernel: eth0: renamed from tmp4512c Apr 14 13:26:50.629376 systemd-networkd[1253]: lxc4c32fe9e2552: Gained carrier Apr 14 13:26:52.346191 systemd-networkd[1253]: lxc4c32fe9e2552: Gained IPv6LL Apr 14 13:26:53.373347 systemd[1]: Started sshd@17-10.0.0.13:22-10.0.0.1:54876.service - OpenSSH per-connection server daemon (10.0.0.1:54876). Apr 14 13:26:53.562756 sshd[4993]: Accepted publickey for core from 10.0.0.1 port 54876 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:26:53.564360 sshd[4993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:26:53.585450 systemd-logind[1561]: New session 18 of user core. Apr 14 13:26:53.600803 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 14 13:26:54.406601 sshd[4993]: pam_unix(sshd:session): session closed for user core Apr 14 13:26:54.416293 systemd[1]: Started sshd@18-10.0.0.13:22-10.0.0.1:54892.service - OpenSSH per-connection server daemon (10.0.0.1:54892). Apr 14 13:26:54.423195 systemd[1]: sshd@17-10.0.0.13:22-10.0.0.1:54876.service: Deactivated successfully. Apr 14 13:26:54.433204 systemd[1]: session-18.scope: Deactivated successfully. Apr 14 13:26:54.441689 systemd-logind[1561]: Session 18 logged out. Waiting for processes to exit. Apr 14 13:26:54.450202 systemd-logind[1561]: Removed session 18. 
Apr 14 13:26:54.499993 sshd[5009]: Accepted publickey for core from 10.0.0.1 port 54892 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:26:54.502618 sshd[5009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:26:54.509458 systemd-logind[1561]: New session 19 of user core. Apr 14 13:26:54.524348 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 14 13:26:55.340431 sshd[5009]: pam_unix(sshd:session): session closed for user core Apr 14 13:26:55.356567 systemd[1]: Started sshd@19-10.0.0.13:22-10.0.0.1:54898.service - OpenSSH per-connection server daemon (10.0.0.1:54898). Apr 14 13:26:55.358035 systemd[1]: sshd@18-10.0.0.13:22-10.0.0.1:54892.service: Deactivated successfully. Apr 14 13:26:55.376038 systemd[1]: session-19.scope: Deactivated successfully. Apr 14 13:26:55.378872 systemd-logind[1561]: Session 19 logged out. Waiting for processes to exit. Apr 14 13:26:55.381361 systemd-logind[1561]: Removed session 19. Apr 14 13:26:55.420628 sshd[5023]: Accepted publickey for core from 10.0.0.1 port 54898 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:26:55.436407 sshd[5023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:26:55.468517 systemd-logind[1561]: New session 20 of user core. Apr 14 13:26:55.481641 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 14 13:27:01.193386 sshd[5023]: pam_unix(sshd:session): session closed for user core Apr 14 13:27:01.222603 systemd[1]: Started sshd@20-10.0.0.13:22-10.0.0.1:55476.service - OpenSSH per-connection server daemon (10.0.0.1:55476). Apr 14 13:27:01.255832 systemd[1]: sshd@19-10.0.0.13:22-10.0.0.1:54898.service: Deactivated successfully. Apr 14 13:27:01.295287 systemd[1]: session-20.scope: Deactivated successfully. Apr 14 13:27:01.416460 systemd-logind[1561]: Session 20 logged out. Waiting for processes to exit. Apr 14 13:27:01.422853 systemd-logind[1561]: Removed session 20. Apr 14 13:27:01.557774 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 55476 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:27:01.607447 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:27:01.927167 systemd-logind[1561]: New session 21 of user core. Apr 14 13:27:01.941026 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 14 13:27:08.438385 sshd[5043]: pam_unix(sshd:session): session closed for user core Apr 14 13:27:08.532698 systemd[1]: Started sshd@21-10.0.0.13:22-10.0.0.1:55488.service - OpenSSH per-connection server daemon (10.0.0.1:55488). Apr 14 13:27:08.953110 systemd[1]: sshd@20-10.0.0.13:22-10.0.0.1:55476.service: Deactivated successfully. Apr 14 13:27:09.161411 systemd[1]: session-21.scope: Deactivated successfully. Apr 14 13:27:09.195290 systemd-logind[1561]: Session 21 logged out. Waiting for processes to exit. Apr 14 13:27:09.349164 systemd-logind[1561]: Removed session 21. 
Apr 14 13:27:09.618229 kubelet[2702]: E0414 13:27:09.617808 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.446s" Apr 14 13:27:11.134658 sshd[5062]: Accepted publickey for core from 10.0.0.1 port 55488 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:27:11.221586 kubelet[2702]: E0414 13:27:11.216517 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:27:11.221161 sshd[5062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:27:11.874901 systemd-logind[1561]: New session 22 of user core. Apr 14 13:27:11.995570 systemd[1]: Started session-22.scope - Session 22 of User core. Apr 14 13:27:13.736934 kubelet[2702]: E0414 13:27:13.736819 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.773s" Apr 14 13:27:15.321649 kubelet[2702]: E0414 13:27:15.321413 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.584s" Apr 14 13:27:17.252896 kubelet[2702]: E0414 13:27:17.240302 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:27:17.252896 kubelet[2702]: E0414 13:27:17.250619 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.927s" Apr 14 13:27:19.889764 kubelet[2702]: E0414 13:27:19.882136 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.63s" Apr 14 13:27:20.933058 sshd[5062]: pam_unix(sshd:session): session closed for user core Apr 14 13:27:21.017233 kubelet[2702]: E0414 13:27:20.946234 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.057s" Apr 14 13:27:21.287453 systemd[1]: sshd@21-10.0.0.13:22-10.0.0.1:55488.service: Deactivated successfully. Apr 14 13:27:21.441129 systemd[1]: session-22.scope: Deactivated successfully. Apr 14 13:27:21.531214 systemd-logind[1561]: Session 22 logged out. Waiting for processes to exit. Apr 14 13:27:21.852883 systemd-logind[1561]: Removed session 22. Apr 14 13:27:26.043600 systemd[1]: Started sshd@22-10.0.0.13:22-10.0.0.1:54494.service - OpenSSH per-connection server daemon (10.0.0.1:54494). Apr 14 13:27:26.596485 sshd[5085]: Accepted publickey for core from 10.0.0.1 port 54494 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:27:26.615650 sshd[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:27:27.032731 systemd-logind[1561]: New session 23 of user core. Apr 14 13:27:27.056436 systemd[1]: Started session-23.scope - Session 23 of User core. Apr 14 13:27:32.237667 sshd[5085]: pam_unix(sshd:session): session closed for user core Apr 14 13:27:32.354343 systemd[1]: sshd@22-10.0.0.13:22-10.0.0.1:54494.service: Deactivated successfully. Apr 14 13:27:32.470749 systemd[1]: session-23.scope: Deactivated successfully. Apr 14 13:27:32.515585 systemd-logind[1561]: Session 23 logged out. Waiting for processes to exit. Apr 14 13:27:32.551602 systemd-logind[1561]: Removed session 23. 
Apr 14 13:27:36.067489 kubelet[2702]: E0414 13:27:36.066582 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:27:37.141935 kubelet[2702]: E0414 13:27:37.141104 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:27:37.563882 systemd[1]: Started sshd@23-10.0.0.13:22-10.0.0.1:57784.service - OpenSSH per-connection server daemon (10.0.0.1:57784). Apr 14 13:27:38.526595 sshd[5106]: Accepted publickey for core from 10.0.0.1 port 57784 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:27:38.539337 sshd[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:27:38.931649 systemd-logind[1561]: New session 24 of user core. Apr 14 13:27:38.932217 systemd[1]: Started session-24.scope - Session 24 of User core. Apr 14 13:27:42.159508 sshd[5106]: pam_unix(sshd:session): session closed for user core Apr 14 13:27:42.224620 systemd[1]: sshd@23-10.0.0.13:22-10.0.0.1:57784.service: Deactivated successfully. Apr 14 13:27:42.390282 systemd[1]: session-24.scope: Deactivated successfully. Apr 14 13:27:42.439326 systemd-logind[1561]: Session 24 logged out. Waiting for processes to exit. Apr 14 13:27:42.582915 systemd-logind[1561]: Removed session 24. Apr 14 13:27:43.983333 kubelet[2702]: E0414 13:27:43.983238 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:27:46.470781 kubelet[2702]: E0414 13:27:46.468937 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:27:47.183038 systemd[1]: Started sshd@24-10.0.0.13:22-10.0.0.1:41934.service - OpenSSH per-connection server daemon (10.0.0.1:41934). Apr 14 13:27:48.610475 sshd[5123]: Accepted publickey for core from 10.0.0.1 port 41934 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:27:48.609053 sshd[5123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:27:48.826471 systemd-logind[1561]: New session 25 of user core. Apr 14 13:27:48.952758 systemd[1]: Started session-25.scope - Session 25 of User core. Apr 14 13:27:49.537244 kubelet[2702]: E0414 13:27:49.537108 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.316s" Apr 14 13:27:50.829107 sshd[5123]: pam_unix(sshd:session): session closed for user core Apr 14 13:27:50.867820 systemd[1]: sshd@24-10.0.0.13:22-10.0.0.1:41934.service: Deactivated successfully. Apr 14 13:27:50.883105 systemd[1]: session-25.scope: Deactivated successfully. Apr 14 13:27:50.901679 systemd-logind[1561]: Session 25 logged out. Waiting for processes to exit. Apr 14 13:27:50.957802 systemd-logind[1561]: Removed session 25. Apr 14 13:27:55.912871 systemd[1]: Started sshd@25-10.0.0.13:22-10.0.0.1:44836.service - OpenSSH per-connection server daemon (10.0.0.1:44836). 
Apr 14 13:27:56.258240 sshd[5141]: Accepted publickey for core from 10.0.0.1 port 44836 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:27:56.274405 sshd[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:27:56.520988 systemd-logind[1561]: New session 26 of user core. Apr 14 13:27:56.576261 systemd[1]: Started session-26.scope - Session 26 of User core. Apr 14 13:27:59.248213 sshd[5141]: pam_unix(sshd:session): session closed for user core Apr 14 13:27:59.387194 systemd[1]: sshd@25-10.0.0.13:22-10.0.0.1:44836.service: Deactivated successfully. Apr 14 13:27:59.406475 systemd[1]: session-26.scope: Deactivated successfully. Apr 14 13:27:59.559968 systemd-logind[1561]: Session 26 logged out. Waiting for processes to exit. Apr 14 13:27:59.600383 systemd-logind[1561]: Removed session 26. Apr 14 13:28:04.440782 systemd[1]: Started sshd@26-10.0.0.13:22-10.0.0.1:60278.service - OpenSSH per-connection server daemon (10.0.0.1:60278). Apr 14 13:28:04.846622 sshd[5157]: Accepted publickey for core from 10.0.0.1 port 60278 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:28:04.864119 sshd[5157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:28:04.901327 systemd-logind[1561]: New session 27 of user core. Apr 14 13:28:04.991558 systemd[1]: Started session-27.scope - Session 27 of User core. Apr 14 13:28:05.993297 sshd[5157]: pam_unix(sshd:session): session closed for user core Apr 14 13:28:06.071819 systemd[1]: Started sshd@27-10.0.0.13:22-10.0.0.1:60284.service - OpenSSH per-connection server daemon (10.0.0.1:60284). Apr 14 13:28:06.077380 systemd[1]: sshd@26-10.0.0.13:22-10.0.0.1:60278.service: Deactivated successfully. Apr 14 13:28:06.083420 systemd[1]: session-27.scope: Deactivated successfully. Apr 14 13:28:06.105393 systemd-logind[1561]: Session 27 logged out. Waiting for processes to exit. Apr 14 13:28:06.107013 systemd-logind[1561]: Removed session 27. Apr 14 13:28:06.135088 sshd[5169]: Accepted publickey for core from 10.0.0.1 port 60284 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:28:06.147102 sshd[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:28:06.164125 systemd-logind[1561]: New session 28 of user core. Apr 14 13:28:06.181735 systemd[1]: Started session-28.scope - Session 28 of User core. Apr 14 13:28:07.194728 systemd-networkd[1253]: lxc23501c599eac: Link DOWN Apr 14 13:28:07.194735 systemd-networkd[1253]: lxc23501c599eac: Lost carrier Apr 14 13:28:07.471764 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Apr 14 13:28:07.507148 systemd[1]: run-netns-cni\x2d2b0715cc\x2d0c2d\x2dccf9\x2d0058\x2d09c41d713b0b.mount: Deactivated successfully. 
Apr 14 13:28:07.526834 containerd[1579]: time="2026-04-14T13:28:07.526567027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jc82b,Uid:35ec3304-c011-45a9-8315-a39d73673d17,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"bcc29808c33dd7cd080324adcede8a5d57dd83be14158224a305fa1d9fc4467e\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 14 13:28:07.527259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bcc29808c33dd7cd080324adcede8a5d57dd83be14158224a305fa1d9fc4467e-shm.mount: Deactivated successfully. Apr 14 13:28:07.529672 kubelet[2702]: E0414 13:28:07.528918 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc29808c33dd7cd080324adcede8a5d57dd83be14158224a305fa1d9fc4467e\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 14 13:28:07.529672 kubelet[2702]: E0414 13:28:07.529091 2702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc29808c33dd7cd080324adcede8a5d57dd83be14158224a305fa1d9fc4467e\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-jc82b" Apr 14 13:28:07.529672 kubelet[2702]: E0414 13:28:07.529114 2702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bcc29808c33dd7cd080324adcede8a5d57dd83be14158224a305fa1d9fc4467e\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-jc82b" Apr 14 13:28:07.529672 kubelet[2702]: E0414 13:28:07.529298 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jc82b_kube-system(35ec3304-c011-45a9-8315-a39d73673d17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jc82b_kube-system(35ec3304-c011-45a9-8315-a39d73673d17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bcc29808c33dd7cd080324adcede8a5d57dd83be14158224a305fa1d9fc4467e\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded\"" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:28:08.731237 containerd[1579]: time="2026-04-14T13:28:08.728300755Z" level=info msg="StopPodSandbox for \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\"" Apr 14 13:28:08.947999 containerd[1579]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Apr 14 13:28:08.947999 containerd[1579]: level=warning msg="Unable to enter namespace \"\", will not delete interface" error="failed to Statfs \"\": no such file or directory" subsys=cilium-cni Apr 14 13:28:08.948995 containerd[1579]: time="2026-04-14T13:28:08.948182813Z" level=info msg="TearDown network for sandbox \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\" successfully" Apr 14 13:28:08.948995 containerd[1579]: time="2026-04-14T13:28:08.948302427Z" level=info msg="StopPodSandbox 
for \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\" returns successfully" Apr 14 13:28:09.084826 kubelet[2702]: E0414 13:28:09.077689 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:28:09.135784 containerd[1579]: time="2026-04-14T13:28:09.135589647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jc82b,Uid:35ec3304-c011-45a9-8315-a39d73673d17,Namespace:kube-system,Attempt:1,}" Apr 14 13:28:09.537625 systemd-networkd[1253]: lxc66f836e2314b: Link UP Apr 14 13:28:09.550631 kernel: eth0: renamed from tmp39122 Apr 14 13:28:09.575235 systemd-networkd[1253]: lxc66f836e2314b: Gained carrier Apr 14 13:28:11.075456 systemd-networkd[1253]: lxc66f836e2314b: Gained IPv6LL Apr 14 13:28:14.637751 containerd[1579]: time="2026-04-14T13:28:14.632886398Z" level=info msg="StopContainer for \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\" with timeout 30 (s)" Apr 14 13:28:14.637751 containerd[1579]: time="2026-04-14T13:28:14.634192908Z" level=info msg="Stop container \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\" with signal terminated" Apr 14 13:28:16.148983 containerd[1579]: time="2026-04-14T13:28:16.118655641Z" level=info msg="StopContainer for \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\" with timeout 2 (s)" Apr 14 13:28:16.331694 sshd[5169]: pam_unix(sshd:session): session closed for user core Apr 14 13:28:16.460610 containerd[1579]: time="2026-04-14T13:28:16.287121064Z" level=info msg="Stop container \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\" with signal terminated" Apr 14 13:28:16.687420 systemd[1]: Started sshd@28-10.0.0.13:22-10.0.0.1:41158.service - OpenSSH per-connection server daemon (10.0.0.1:41158). Apr 14 13:28:17.170283 systemd[1]: sshd@27-10.0.0.13:22-10.0.0.1:60284.service: Deactivated successfully. Apr 14 13:28:17.208501 systemd[1]: session-28.scope: Deactivated successfully. Apr 14 13:28:17.226164 kubelet[2702]: E0414 13:28:17.209773 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.108s" Apr 14 13:28:17.351909 systemd-logind[1561]: Session 28 logged out. Waiting for processes to exit. Apr 14 13:28:17.356028 systemd-logind[1561]: Removed session 28. Apr 14 13:28:17.989786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc-rootfs.mount: Deactivated successfully. 
Apr 14 13:28:18.145902 sshd[5258]: Accepted publickey for core from 10.0.0.1 port 41158 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:28:18.131953 systemd-networkd[1253]: lxc_health: Link DOWN Apr 14 13:28:18.130911 sshd[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:28:18.131957 systemd-networkd[1253]: lxc_health: Lost carrier Apr 14 13:28:18.347475 containerd[1579]: time="2026-04-14T13:28:18.340940619Z" level=info msg="shim disconnected" id=c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc namespace=k8s.io Apr 14 13:28:18.361370 containerd[1579]: time="2026-04-14T13:28:18.358755721Z" level=warning msg="cleaning up after shim disconnected" id=c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc namespace=k8s.io Apr 14 13:28:18.364637 containerd[1579]: time="2026-04-14T13:28:18.363878804Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:28:18.657440 systemd-logind[1561]: New session 29 of user core. Apr 14 13:28:18.671163 systemd[1]: Started session-29.scope - Session 29 of User core. Apr 14 13:28:19.260766 containerd[1579]: time="2026-04-14T13:28:19.257853924Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 14 13:28:19.487849 containerd[1579]: time="2026-04-14T13:28:19.487476375Z" level=info msg="StopContainer for \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\" returns successfully" Apr 14 13:28:19.491769 containerd[1579]: time="2026-04-14T13:28:19.491732515Z" level=info msg="StopPodSandbox for \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\"" Apr 14 13:28:19.493451 containerd[1579]: time="2026-04-14T13:28:19.491820902Z" level=info msg="Container to stop \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 13:28:19.493451 containerd[1579]: time="2026-04-14T13:28:19.491833399Z" level=info msg="Container to stop \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 13:28:19.500817 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9-shm.mount: Deactivated successfully. Apr 14 13:28:19.646039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9-rootfs.mount: Deactivated successfully. 
Apr 14 13:28:19.808762 containerd[1579]: time="2026-04-14T13:28:19.805669880Z" level=info msg="shim disconnected" id=b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9 namespace=k8s.io Apr 14 13:28:19.808762 containerd[1579]: time="2026-04-14T13:28:19.806212413Z" level=warning msg="cleaning up after shim disconnected" id=b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9 namespace=k8s.io Apr 14 13:28:19.808762 containerd[1579]: time="2026-04-14T13:28:19.806635176Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:28:20.071389 containerd[1579]: time="2026-04-14T13:28:20.058911261Z" level=info msg="TearDown network for sandbox \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\" successfully" Apr 14 13:28:20.071389 containerd[1579]: time="2026-04-14T13:28:20.058951871Z" level=info msg="StopPodSandbox for \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\" returns successfully" Apr 14 13:28:20.300870 kubelet[2702]: I0414 13:28:20.297390 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d09ec03a-4296-44dc-b569-d7f061ca22b0-cilium-config-path\") pod \"d09ec03a-4296-44dc-b569-d7f061ca22b0\" (UID: \"d09ec03a-4296-44dc-b569-d7f061ca22b0\") " Apr 14 13:28:20.388028 kubelet[2702]: I0414 13:28:20.345002 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7znzz\" (UniqueName: \"kubernetes.io/projected/d09ec03a-4296-44dc-b569-d7f061ca22b0-kube-api-access-7znzz\") pod \"d09ec03a-4296-44dc-b569-d7f061ca22b0\" (UID: \"d09ec03a-4296-44dc-b569-d7f061ca22b0\") " Apr 14 13:28:20.516432 containerd[1579]: time="2026-04-14T13:28:20.292291118Z" level=info msg="Kill container \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\"" Apr 14 13:28:20.815519 kubelet[2702]: I0414 13:28:20.795295 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d09ec03a-4296-44dc-b569-d7f061ca22b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d09ec03a-4296-44dc-b569-d7f061ca22b0" (UID: "d09ec03a-4296-44dc-b569-d7f061ca22b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 13:28:20.848663 systemd[1]: var-lib-kubelet-pods-d09ec03a\x2d4296\x2d44dc\x2db569\x2dd7f061ca22b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7znzz.mount: Deactivated successfully. Apr 14 13:28:20.975355 systemd-networkd[1253]: lxc66f836e2314b: Link DOWN Apr 14 13:28:21.325021 kubelet[2702]: I0414 13:28:20.934240 2702 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d09ec03a-4296-44dc-b569-d7f061ca22b0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:21.325021 kubelet[2702]: I0414 13:28:21.141999 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09ec03a-4296-44dc-b569-d7f061ca22b0-kube-api-access-7znzz" (OuterVolumeSpecName: "kube-api-access-7znzz") pod "d09ec03a-4296-44dc-b569-d7f061ca22b0" (UID: "d09ec03a-4296-44dc-b569-d7f061ca22b0"). InnerVolumeSpecName "kube-api-access-7znzz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 13:28:21.325021 kubelet[2702]: I0414 13:28:21.158682 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7znzz\" (UniqueName: \"kubernetes.io/projected/d09ec03a-4296-44dc-b569-d7f061ca22b0-kube-api-access-7znzz\") pod \"d09ec03a-4296-44dc-b569-d7f061ca22b0\" (UID: \"d09ec03a-4296-44dc-b569-d7f061ca22b0\") " Apr 14 13:28:21.325021 kubelet[2702]: W0414 13:28:21.159055 2702 empty_dir.go:511] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d09ec03a-4296-44dc-b569-d7f061ca22b0/volumes/kubernetes.io~projected/kube-api-access-7znzz Apr 14 13:28:21.325021 kubelet[2702]: I0414 13:28:21.159092 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d09ec03a-4296-44dc-b569-d7f061ca22b0-kube-api-access-7znzz" (OuterVolumeSpecName: "kube-api-access-7znzz") pod "d09ec03a-4296-44dc-b569-d7f061ca22b0" (UID: "d09ec03a-4296-44dc-b569-d7f061ca22b0"). InnerVolumeSpecName "kube-api-access-7znzz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 13:28:20.975359 systemd-networkd[1253]: lxc66f836e2314b: Lost carrier Apr 14 13:28:21.359665 containerd[1579]: time="2026-04-14T13:28:21.347998461Z" level=error msg="Failed to destroy network for sandbox \"391221e82bf626aa633432dd7fb399f8d88adf6a780e7814ac1e81a84e8ff58d\"" error="cni plugin not initialized" Apr 14 13:28:21.423486 kubelet[2702]: I0414 13:28:21.416176 2702 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7znzz\" (UniqueName: \"kubernetes.io/projected/d09ec03a-4296-44dc-b569-d7f061ca22b0-kube-api-access-7znzz\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:21.453049 systemd-networkd[1253]: lxc4c32fe9e2552: Link DOWN Apr 14 13:28:21.465870 systemd-networkd[1253]: lxc4c32fe9e2552: Lost carrier Apr 14 13:28:21.577895 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-391221e82bf626aa633432dd7fb399f8d88adf6a780e7814ac1e81a84e8ff58d-shm.mount: Deactivated successfully. 
Apr 14 13:28:21.587372 containerd[1579]: time="2026-04-14T13:28:21.586663036Z" level=error msg="encountered an error cleaning up failed sandbox \"391221e82bf626aa633432dd7fb399f8d88adf6a780e7814ac1e81a84e8ff58d\", marking sandbox state as SANDBOX_UNKNOWN" error="cni plugin not initialized" Apr 14 13:28:21.587372 containerd[1579]: time="2026-04-14T13:28:21.587226597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-jc82b,Uid:35ec3304-c011-45a9-8315-a39d73673d17,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"391221e82bf626aa633432dd7fb399f8d88adf6a780e7814ac1e81a84e8ff58d\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" Apr 14 13:28:21.690515 containerd[1579]: time="2026-04-14T13:28:21.686383472Z" level=error msg="Failed to destroy network for sandbox \"4512cf88c16acc194d0d5e71b52e3c91cd1f728638b3fb7b94888c788585d100\"" error="cni plugin not initialized" Apr 14 13:28:21.710395 kubelet[2702]: E0414 13:28:21.710104 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391221e82bf626aa633432dd7fb399f8d88adf6a780e7814ac1e81a84e8ff58d\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" Apr 14 13:28:21.725233 kubelet[2702]: E0414 13:28:21.710658 2702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391221e82bf626aa633432dd7fb399f8d88adf6a780e7814ac1e81a84e8ff58d\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" pod="kube-system/coredns-674b8bbfcf-jc82b" Apr 14 13:28:21.725233 kubelet[2702]: E0414 13:28:21.710715 2702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391221e82bf626aa633432dd7fb399f8d88adf6a780e7814ac1e81a84e8ff58d\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Put \"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\": EOF" pod="kube-system/coredns-674b8bbfcf-jc82b" Apr 14 13:28:21.725233 kubelet[2702]: E0414 13:28:21.710894 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-jc82b_kube-system(35ec3304-c011-45a9-8315-a39d73673d17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-jc82b_kube-system(35ec3304-c011-45a9-8315-a39d73673d17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"391221e82bf626aa633432dd7fb399f8d88adf6a780e7814ac1e81a84e8ff58d\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Put \\\"http:///var/run/cilium/cilium.sock/v1/endpoint/cilium-local:0\\\": EOF\"" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:28:21.710678 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4512cf88c16acc194d0d5e71b52e3c91cd1f728638b3fb7b94888c788585d100-shm.mount: Deactivated successfully. 
Apr 14 13:28:21.726165 containerd[1579]: time="2026-04-14T13:28:21.725959528Z" level=error msg="encountered an error cleaning up failed sandbox \"4512cf88c16acc194d0d5e71b52e3c91cd1f728638b3fb7b94888c788585d100\", marking sandbox state as SANDBOX_UNKNOWN" error="cni plugin not initialized" Apr 14 13:28:21.726309 containerd[1579]: time="2026-04-14T13:28:21.726272622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-hld84,Uid:67191550-1737-456e-a429-02a63e9c7256,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4512cf88c16acc194d0d5e71b52e3c91cd1f728638b3fb7b94888c788585d100\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 14 13:28:21.935999 kubelet[2702]: E0414 13:28:21.923054 2702 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4512cf88c16acc194d0d5e71b52e3c91cd1f728638b3fb7b94888c788585d100\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" Apr 14 13:28:22.133207 kubelet[2702]: E0414 13:28:22.129406 2702 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4512cf88c16acc194d0d5e71b52e3c91cd1f728638b3fb7b94888c788585d100\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-hld84" Apr 14 13:28:22.309118 kubelet[2702]: E0414 13:28:22.294223 2702 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4512cf88c16acc194d0d5e71b52e3c91cd1f728638b3fb7b94888c788585d100\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded" pod="kube-system/coredns-674b8bbfcf-hld84" Apr 14 13:28:22.525979 kubelet[2702]: E0414 13:28:22.525352 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-hld84_kube-system(67191550-1737-456e-a429-02a63e9c7256)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-hld84_kube-system(67191550-1737-456e-a429-02a63e9c7256)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4512cf88c16acc194d0d5e71b52e3c91cd1f728638b3fb7b94888c788585d100\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): Unable to create endpoint: Cilium API client timeout exceeded\"" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:28:22.565901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc-rootfs.mount: Deactivated successfully. 
Apr 14 13:28:22.889974 kubelet[2702]: E0414 13:28:22.859251 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:28:23.045103 containerd[1579]: time="2026-04-14T13:28:23.039584873Z" level=info msg="shim disconnected" id=396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc namespace=k8s.io Apr 14 13:28:23.045103 containerd[1579]: time="2026-04-14T13:28:23.040720116Z" level=warning msg="cleaning up after shim disconnected" id=396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc namespace=k8s.io Apr 14 13:28:23.045103 containerd[1579]: time="2026-04-14T13:28:23.040736288Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:28:23.227932 kubelet[2702]: I0414 13:28:23.227471 2702 scope.go:117] "RemoveContainer" containerID="c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc" Apr 14 13:28:23.236491 containerd[1579]: time="2026-04-14T13:28:23.236414665Z" level=info msg="RemoveContainer for \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\"" Apr 14 13:28:23.252202 containerd[1579]: time="2026-04-14T13:28:23.252048204Z" level=info msg="RemoveContainer for \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\" returns successfully" Apr 14 13:28:23.399677 kubelet[2702]: I0414 13:28:23.399123 2702 scope.go:117] "RemoveContainer" containerID="67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443" Apr 14 13:28:23.804999 containerd[1579]: time="2026-04-14T13:28:23.804798411Z" level=info msg="StopContainer for \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\" returns successfully" Apr 14 13:28:23.986798 containerd[1579]: time="2026-04-14T13:28:23.979318593Z" level=info msg="StopPodSandbox for \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\"" Apr 14 13:28:23.986798 containerd[1579]: time="2026-04-14T13:28:23.979307529Z" level=info msg="RemoveContainer for \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\"" Apr 14 13:28:23.986798 containerd[1579]: time="2026-04-14T13:28:23.979739132Z" level=info msg="Container to stop \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 13:28:23.986798 containerd[1579]: time="2026-04-14T13:28:23.979760853Z" level=info msg="Container to stop \"424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 13:28:23.986798 containerd[1579]: time="2026-04-14T13:28:23.979825716Z" level=info msg="Container to stop \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 13:28:23.986798 containerd[1579]: time="2026-04-14T13:28:23.979839897Z" level=info msg="Container to stop \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 13:28:23.986798 containerd[1579]: time="2026-04-14T13:28:23.979851345Z" level=info msg="Container to stop \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 14 13:28:24.133827 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90-shm.mount: 
Deactivated successfully. Apr 14 13:28:24.248590 containerd[1579]: time="2026-04-14T13:28:24.248319294Z" level=info msg="RemoveContainer for \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\" returns successfully" Apr 14 13:28:24.251033 kubelet[2702]: I0414 13:28:24.250920 2702 scope.go:117] "RemoveContainer" containerID="c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc" Apr 14 13:28:24.252381 kubelet[2702]: I0414 13:28:24.252348 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d09ec03a-4296-44dc-b569-d7f061ca22b0" path="/var/lib/kubelet/pods/d09ec03a-4296-44dc-b569-d7f061ca22b0/volumes" Apr 14 13:28:24.379629 containerd[1579]: time="2026-04-14T13:28:24.379342407Z" level=error msg="ContainerStatus for \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\": not found" Apr 14 13:28:24.504167 kubelet[2702]: E0414 13:28:24.452326 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\": not found" containerID="c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc" Apr 14 13:28:24.544989 kubelet[2702]: I0414 13:28:24.519383 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc"} err="failed to get container status \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"c16dc8aba62071310b2dfebeb10f4f7bd8f4ad3decb26da2592c7faba4e0b7dc\": not found" Apr 14 13:28:24.558573 kubelet[2702]: I0414 13:28:24.558345 2702 scope.go:117] "RemoveContainer" containerID="67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443" Apr 14 13:28:24.678714 containerd[1579]: time="2026-04-14T13:28:24.676844346Z" level=error msg="ContainerStatus for \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\": not found" Apr 14 13:28:24.725384 kubelet[2702]: E0414 13:28:24.717407 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\": not found" containerID="67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443" Apr 14 13:28:24.780392 kubelet[2702]: I0414 13:28:24.760560 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443"} err="failed to get container status \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\": rpc error: code = NotFound desc = an error occurred when try to find container \"67e26038ab196c8ce13fc519fd5b202f6fce4afce97422e73c8fd58a4efce443\": not found" Apr 14 13:28:25.147241 containerd[1579]: io.containerd.runc.v2: remove /run/containerd/s/529ca28e72bbdf5cd27ee23ef8448086a5f34e6a88a5ea7d9721a6cce6eaa62e: no such file or directory time="2026-04-14T13:28:25.147016369Z" level=info msg="shim disconnected"
id=02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90 namespace=k8s.io Apr 14 13:28:25.165224 containerd[1579]: time="2026-04-14T13:28:25.160489652Z" level=warning msg="cleaning up after shim disconnected" id=02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90 namespace=k8s.io Apr 14 13:28:25.170434 containerd[1579]: time="2026-04-14T13:28:25.169685741Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:28:25.170881 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90-rootfs.mount: Deactivated successfully. Apr 14 13:28:25.551218 containerd[1579]: time="2026-04-14T13:28:25.481574203Z" level=info msg="TearDown network for sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" successfully" Apr 14 13:28:25.551218 containerd[1579]: time="2026-04-14T13:28:25.551122468Z" level=info msg="StopPodSandbox for \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" returns successfully" Apr 14 13:28:25.570910 kubelet[2702]: E0414 13:28:25.568311 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:28:25.573859 kubelet[2702]: I0414 13:28:25.572163 2702 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="391221e82bf626aa633432dd7fb399f8d88adf6a780e7814ac1e81a84e8ff58d" Apr 14 13:28:25.796646 kubelet[2702]: I0414 13:28:25.788476 2702 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4512cf88c16acc194d0d5e71b52e3c91cd1f728638b3fb7b94888c788585d100" Apr 14 13:28:25.815022 kubelet[2702]: E0414 13:28:25.807375 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:28:26.123276 sshd[5258]: pam_unix(sshd:session): session closed for user core Apr 14 13:28:26.423369 systemd[1]: Started sshd@29-10.0.0.13:22-10.0.0.1:53574.service - OpenSSH per-connection server daemon (10.0.0.1:53574). 
Apr 14 13:28:26.448317 kubelet[2702]: I0414 13:28:26.423828 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-host-proc-sys-kernel\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.448317 kubelet[2702]: I0414 13:28:26.423924 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-etc-cni-netd\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.448317 kubelet[2702]: I0414 13:28:26.423991 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29979523-b9ef-4e95-ada7-13b2d8b91c40-clustermesh-secrets\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.448317 kubelet[2702]: I0414 13:28:26.424006 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29979523-b9ef-4e95-ada7-13b2d8b91c40-hubble-tls\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.448317 kubelet[2702]: I0414 13:28:26.424017 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-run\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.448317 kubelet[2702]: I0414 13:28:26.424027 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-cgroup\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.424457 systemd[1]: sshd@28-10.0.0.13:22-10.0.0.1:41158.service: Deactivated successfully. 
Apr 14 13:28:26.449223 kubelet[2702]: I0414 13:28:26.424041 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j25gm\" (UniqueName: \"kubernetes.io/projected/29979523-b9ef-4e95-ada7-13b2d8b91c40-kube-api-access-j25gm\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.449223 kubelet[2702]: I0414 13:28:26.424078 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-host-proc-sys-net\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.449223 kubelet[2702]: I0414 13:28:26.424110 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-lib-modules\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.449223 kubelet[2702]: I0414 13:28:26.424139 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-config-path\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.449223 kubelet[2702]: I0414 13:28:26.424189 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-xtables-lock\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.449223 kubelet[2702]: I0414 13:28:26.424209 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cni-path\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.449385 kubelet[2702]: I0414 13:28:26.424225 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-hostproc\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.449385 kubelet[2702]: I0414 13:28:26.424248 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-bpf-maps\") pod \"29979523-b9ef-4e95-ada7-13b2d8b91c40\" (UID: \"29979523-b9ef-4e95-ada7-13b2d8b91c40\") " Apr 14 13:28:26.450688 kubelet[2702]: I0414 13:28:26.450450 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 13:28:26.454318 kubelet[2702]: I0414 13:28:26.450908 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 13:28:26.454318 kubelet[2702]: I0414 13:28:26.450937 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 13:28:26.555467 kubelet[2702]: I0414 13:28:26.546798 2702 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-bpf-maps\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:26.555467 kubelet[2702]: I0414 13:28:26.547260 2702 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:26.555467 kubelet[2702]: I0414 13:28:26.547282 2702 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:26.587710 systemd[1]: session-29.scope: Deactivated successfully. Apr 14 13:28:26.743171 systemd-logind[1561]: Session 29 logged out. Waiting for processes to exit. Apr 14 13:28:26.764432 systemd-logind[1561]: Removed session 29. Apr 14 13:28:26.842047 kubelet[2702]: I0414 13:28:26.841474 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 13:28:27.108320 systemd[1]: var-lib-kubelet-pods-29979523\x2db9ef\x2d4e95\x2dada7\x2d13b2d8b91c40-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Apr 14 13:28:27.474293 kubelet[2702]: I0414 13:28:27.473707 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 13:28:27.475572 kubelet[2702]: I0414 13:28:27.475499 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 13:28:27.520962 kubelet[2702]: I0414 13:28:27.516597 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cni-path" (OuterVolumeSpecName: "cni-path") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 13:28:27.540478 sshd[5421]: Accepted publickey for core from 10.0.0.1 port 53574 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:28:27.655367 systemd[1]: var-lib-kubelet-pods-29979523\x2db9ef\x2d4e95\x2dada7\x2d13b2d8b91c40-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Apr 14 13:28:27.941488 systemd[1]: var-lib-kubelet-pods-29979523\x2db9ef\x2d4e95\x2dada7\x2d13b2d8b91c40-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj25gm.mount: Deactivated successfully. Apr 14 13:28:27.969452 sshd[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:28:28.060916 kubelet[2702]: E0414 13:28:28.058733 2702 manager.go:1116] Failed to create existing container: /kubepods/burstable/pod29979523-b9ef-4e95-ada7-13b2d8b91c40/396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc: task 396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc not found Apr 14 13:28:28.120075 kubelet[2702]: I0414 13:28:27.538348 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 13:28:28.148494 kubelet[2702]: I0414 13:28:28.143269 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29979523-b9ef-4e95-ada7-13b2d8b91c40-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 13:28:28.153106 kubelet[2702]: I0414 13:28:27.915025 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 13:28:28.153739 kubelet[2702]: I0414 13:28:28.153404 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 14 13:28:28.153871 kubelet[2702]: I0414 13:28:28.153450 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29979523-b9ef-4e95-ada7-13b2d8b91c40-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 14 13:28:28.154688 kubelet[2702]: I0414 13:28:28.153234 2702 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.157699 kubelet[2702]: I0414 13:28:28.154885 2702 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-lib-modules\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.157699 kubelet[2702]: I0414 13:28:28.154897 2702 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cni-path\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.157699 kubelet[2702]: I0414 13:28:28.154905 2702 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29979523-b9ef-4e95-ada7-13b2d8b91c40-hubble-tls\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.157699 kubelet[2702]: I0414 13:28:28.154912 2702 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-run\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.157699 kubelet[2702]: I0414 13:28:28.154922 2702 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.157699 kubelet[2702]: I0414 13:28:28.155484 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-hostproc" (OuterVolumeSpecName: "hostproc") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Apr 14 13:28:28.157699 kubelet[2702]: I0414 13:28:28.157034 2702 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29979523-b9ef-4e95-ada7-13b2d8b91c40-kube-api-access-j25gm" (OuterVolumeSpecName: "kube-api-access-j25gm") pod "29979523-b9ef-4e95-ada7-13b2d8b91c40" (UID: "29979523-b9ef-4e95-ada7-13b2d8b91c40"). InnerVolumeSpecName "kube-api-access-j25gm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 14 13:28:28.155685 systemd-logind[1561]: New session 30 of user core. Apr 14 13:28:28.164664 systemd[1]: Started session-30.scope - Session 30 of User core. 
Apr 14 13:28:28.220605 kubelet[2702]: E0414 13:28:28.169820 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:28:28.368847 kubelet[2702]: I0414 13:28:28.341349 2702 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-hostproc\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.383834 kubelet[2702]: I0414 13:28:28.379265 2702 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29979523-b9ef-4e95-ada7-13b2d8b91c40-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.435331 kubelet[2702]: I0414 13:28:28.425905 2702 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j25gm\" (UniqueName: \"kubernetes.io/projected/29979523-b9ef-4e95-ada7-13b2d8b91c40-kube-api-access-j25gm\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.435331 kubelet[2702]: I0414 13:28:28.426120 2702 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29979523-b9ef-4e95-ada7-13b2d8b91c40-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.435331 kubelet[2702]: I0414 13:28:28.426172 2702 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29979523-b9ef-4e95-ada7-13b2d8b91c40-xtables-lock\") on node \"localhost\" DevicePath \"\"" Apr 14 13:28:28.681296 sshd[5421]: pam_unix(sshd:session): session closed for user core Apr 14 13:28:29.080679 systemd[1]: sshd@29-10.0.0.13:22-10.0.0.1:53574.service: Deactivated successfully. Apr 14 13:28:29.247776 systemd[1]: session-30.scope: Deactivated successfully. Apr 14 13:28:29.476699 systemd-logind[1561]: Session 30 logged out. Waiting for processes to exit. Apr 14 13:28:29.565448 systemd[1]: Started sshd@30-10.0.0.13:22-10.0.0.1:53578.service - OpenSSH per-connection server daemon (10.0.0.1:53578). Apr 14 13:28:29.717611 kubelet[2702]: E0414 13:28:29.715976 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.677s" Apr 14 13:28:29.734761 systemd-logind[1561]: Removed session 30. 
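The kubelet.go:3117 "Container runtime network not ready" errors here are about to flip the node's Ready condition to False, which the setters.go:618 entry just below records. A quick way to observe that condition from outside the node is a client-go lookup; a minimal sketch, assuming kubeconfig access to this cluster and the node name "localhost" from the log:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "localhost", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// While the CNI config is missing this prints Status=False with
			// reason KubeletNotReady, matching the condition logged below.
			fmt.Printf("Ready=%s reason=%s msg=%s\n", c.Status, c.Reason, c.Message)
		}
	}
}
```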
Apr 14 13:28:30.442075 kubelet[2702]: I0414 13:28:30.441712 2702 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-04-14T13:28:30Z","lastTransitionTime":"2026-04-14T13:28:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Apr 14 13:28:30.783675 kubelet[2702]: I0414 13:28:30.781670 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-host-proc-sys-net\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:30.806842 sshd[5435]: Accepted publickey for core from 10.0.0.1 port 53578 ssh2: RSA SHA256:yxEwPRvngNLBDT3pQyHsh993EalqAg0K7yQpushVk/s Apr 14 13:28:30.838418 kubelet[2702]: I0414 13:28:30.838124 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-xtables-lock\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:30.858947 sshd[5435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 13:28:31.863782 systemd-logind[1561]: New session 31 of user core. Apr 14 13:28:31.895113 systemd[1]: Started session-31.scope - Session 31 of User core. Apr 14 13:28:32.504765 kubelet[2702]: I0414 13:28:32.489954 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-cilium-config-path\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:33.681462 kubelet[2702]: I0414 13:28:33.673026 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-cilium-ipsec-secrets\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:33.681462 kubelet[2702]: I0414 13:28:33.673593 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-hubble-tls\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:33.932619 kubelet[2702]: E0414 13:28:33.898127 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.813s" Apr 14 13:28:34.160327 kubelet[2702]: I0414 13:28:33.983151 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbc89\" (UniqueName: \"kubernetes.io/projected/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-kube-api-access-pbc89\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:34.370519 kubelet[2702]: I0414 13:28:34.360147 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-cni-path\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:34.843648 kubelet[2702]: E0414 13:28:34.843289 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:28:36.175982 kubelet[2702]: E0414 13:28:36.061083 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:28:36.294265 kubelet[2702]: E0414 13:28:36.179028 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.706s" Apr 14 13:28:36.342276 kubelet[2702]: I0414 13:28:36.293268 2702 scope.go:117] "RemoveContainer" containerID="8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd" Apr 14 13:28:36.391290 kubelet[2702]: I0414 13:28:36.224438 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-etc-cni-netd\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:37.295126 kubelet[2702]: I0414 13:28:37.291287 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-hostproc\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:37.330292 kubelet[2702]: E0414 13:28:37.327929 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:28:38.078976 kubelet[2702]: I0414 13:28:38.047117 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-host-proc-sys-kernel\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:40.625077 kubelet[2702]: I0414 13:28:40.231393 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-bpf-maps\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:41.315260 kubelet[2702]: I0414 13:28:41.310527 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-cilium-cgroup\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:41.659507 kubelet[2702]: I0414 13:28:41.655590 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-lib-modules\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:41.961007 kubelet[2702]: I0414 13:28:41.759519 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-cilium-run\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:42.016529 containerd[1579]: time="2026-04-14T13:28:41.985482800Z" level=info msg="RemoveContainer for \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\"" Apr 14 13:28:42.070836 kubelet[2702]: I0414 13:28:42.064057 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a2d8fcdb-1f86-4fba-ad54-71dd33b85848-clustermesh-secrets\") pod \"cilium-wltbk\" (UID: \"a2d8fcdb-1f86-4fba-ad54-71dd33b85848\") " pod="kube-system/cilium-wltbk" Apr 14 13:28:42.214878 containerd[1579]: time="2026-04-14T13:28:42.194411709Z" level=info msg="RemoveContainer for \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\" returns successfully" Apr 14 13:28:42.382529 kubelet[2702]: I0414 13:28:42.330817 2702 scope.go:117] "RemoveContainer" containerID="88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884" Apr 14 13:28:42.394669 kubelet[2702]: E0414 13:28:42.382257 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:28:42.462214 kubelet[2702]: E0414 13:28:42.462066 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.168s" Apr 14 13:28:42.526098 kubelet[2702]: I0414 13:28:42.507973 2702 scope.go:117] "RemoveContainer" containerID="396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc" Apr 14 13:28:42.862798 containerd[1579]: time="2026-04-14T13:28:42.855702085Z" level=info msg="RemoveContainer for \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\"" Apr 14 13:28:43.354074 containerd[1579]: time="2026-04-14T13:28:43.352862929Z" level=info msg="RemoveContainer for \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\" returns successfully" Apr 14 13:28:43.356066 containerd[1579]: time="2026-04-14T13:28:43.355434909Z" level=info msg="RemoveContainer for \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\"" Apr 14 13:28:43.356344 kubelet[2702]: E0414 13:28:43.356071 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:28:43.375381 kubelet[2702]: E0414 13:28:43.374801 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:28:43.386732 kubelet[2702]: I0414 13:28:43.384903 2702 scope.go:117] "RemoveContainer" containerID="396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc" Apr 14 13:28:43.463779 
containerd[1579]: time="2026-04-14T13:28:43.460385977Z" level=info msg="RemoveContainer for \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\" returns successfully" Apr 14 13:28:43.593834 containerd[1579]: time="2026-04-14T13:28:43.593655127Z" level=error msg="ContainerStatus for \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\": not found" Apr 14 13:28:43.744324 kubelet[2702]: I0414 13:28:43.743767 2702 scope.go:117] "RemoveContainer" containerID="8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd" Apr 14 13:28:43.746897 kubelet[2702]: E0414 13:28:43.745225 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\": not found" containerID="396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc" Apr 14 13:28:43.746897 kubelet[2702]: E0414 13:28:43.746602 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:28:43.751966 containerd[1579]: time="2026-04-14T13:28:43.747898962Z" level=info msg="RemoveContainer for \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\"" Apr 14 13:28:43.751966 containerd[1579]: time="2026-04-14T13:28:43.747942578Z" level=info msg="RemoveContainer for \"396ce981175b43abc89bf3d313a149d8f1040e550472b2c7961acea4f17583bc\" returns successfully" Apr 14 13:28:43.757120 containerd[1579]: time="2026-04-14T13:28:43.755463638Z" level=error msg="ContainerStatus for \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\": not found" Apr 14 13:28:43.757255 kubelet[2702]: E0414 13:28:43.756845 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\": not found" containerID="8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd" Apr 14 13:28:43.757255 kubelet[2702]: I0414 13:28:43.757000 2702 scope.go:117] "RemoveContainer" containerID="6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc" Apr 14 13:28:43.757255 kubelet[2702]: I0414 13:28:43.757045 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd"} err="failed to get container status \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e208f3fa3ea14c6f3cf99273c6e5fa885b27ea1f81367fe4db514a2cffba3dd\": not found" Apr 14 13:28:43.757255 kubelet[2702]: I0414 13:28:43.757195 2702 scope.go:117] "RemoveContainer" containerID="88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884" Apr 14 13:28:43.759110 containerd[1579]: time="2026-04-14T13:28:43.759032424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wltbk,Uid:a2d8fcdb-1f86-4fba-ad54-71dd33b85848,Namespace:kube-system,Attempt:0,}" Apr 14 13:28:43.935928 
containerd[1579]: time="2026-04-14T13:28:43.935648171Z" level=error msg="ContainerStatus for \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\": not found" Apr 14 13:28:44.121947 kubelet[2702]: E0414 13:28:44.120651 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\": not found" containerID="88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884" Apr 14 13:28:44.121947 kubelet[2702]: I0414 13:28:44.121167 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884"} err="failed to get container status \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\": rpc error: code = NotFound desc = an error occurred when try to find container \"88227698bc8072666b0c461164a87b241edcc52a7315d816154e503d3be81884\": not found" Apr 14 13:28:44.121947 kubelet[2702]: I0414 13:28:44.121403 2702 scope.go:117] "RemoveContainer" containerID="424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e" Apr 14 13:28:44.960640 containerd[1579]: time="2026-04-14T13:28:44.960448206Z" level=info msg="RemoveContainer for \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\"" Apr 14 13:28:45.269835 kubelet[2702]: E0414 13:28:45.259623 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:28:45.324975 containerd[1579]: time="2026-04-14T13:28:45.323455448Z" level=info msg="RemoveContainer for \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\" returns successfully" Apr 14 13:28:45.506111 containerd[1579]: time="2026-04-14T13:28:45.504024974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 13:28:45.506111 containerd[1579]: time="2026-04-14T13:28:45.504131243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 13:28:45.506111 containerd[1579]: time="2026-04-14T13:28:45.504148730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:28:45.506111 containerd[1579]: time="2026-04-14T13:28:45.504385967Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 13:28:45.830989 kubelet[2702]: I0414 13:28:45.500939 2702 scope.go:117] "RemoveContainer" containerID="424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e" Apr 14 13:28:46.222331 containerd[1579]: time="2026-04-14T13:28:46.221961248Z" level=info msg="RemoveContainer for \"424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e\"" Apr 14 13:28:46.343830 containerd[1579]: time="2026-04-14T13:28:46.343604232Z" level=info msg="RemoveContainer for \"424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e\" returns successfully" Apr 14 13:28:46.496709 kubelet[2702]: I0414 13:28:46.494271 2702 scope.go:117] "RemoveContainer" containerID="6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc" Apr 14 13:28:46.503556 kubelet[2702]: I0414 13:28:46.502778 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29979523-b9ef-4e95-ada7-13b2d8b91c40" path="/var/lib/kubelet/pods/29979523-b9ef-4e95-ada7-13b2d8b91c40/volumes" Apr 14 13:28:46.507041 containerd[1579]: time="2026-04-14T13:28:46.506916609Z" level=info msg="RemoveContainer for \"424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e\"" Apr 14 13:28:46.507041 containerd[1579]: time="2026-04-14T13:28:46.507024730Z" level=info msg="RemoveContainer for \"424489f34fdce4b7743e40c745ea1195c9a2c8bd6fc0ec17242fed31716fae7e\" returns successfully" Apr 14 13:28:46.507292 containerd[1579]: time="2026-04-14T13:28:46.507173487Z" level=error msg="ContainerStatus for \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\": not found" Apr 14 13:28:46.508770 kubelet[2702]: E0414 13:28:46.508628 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\": not found" containerID="6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc" Apr 14 13:28:46.509071 kubelet[2702]: I0414 13:28:46.508888 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc"} err="failed to get container status \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ae92caa6a7a43bc732a7b205b970dd5e5a4d20947c6cee2d194744f4879c8cc\": not found" Apr 14 13:28:46.509509 kubelet[2702]: E0414 13:28:46.509485 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.715s" Apr 14 13:28:46.775687 containerd[1579]: time="2026-04-14T13:28:46.772803363Z" level=info msg="StopPodSandbox for \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\"" Apr 14 13:28:46.775687 containerd[1579]: time="2026-04-14T13:28:46.773333896Z" level=error msg="StopPodSandbox for \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\" failed" error="failed to destroy network for sandbox \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\": cni plugin not initialized" Apr 14 13:28:46.789067 kubelet[2702]: E0414 13:28:46.788406 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:28:46.790207 kubelet[2702]: E0414 13:28:46.789257 2702 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\": cni plugin not initialized" podSandboxID="f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70" Apr 14 13:28:46.790207 kubelet[2702]: E0414 13:28:46.789566 2702 kuberuntime_gc.go:180] "Failed to stop sandbox before removing" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70\": cni plugin not initialized" sandboxID="f1cd519e32a13d6b9fa9951af862a4e1d5c24973a31447017fe1d19ff5a56a70" Apr 14 13:28:46.849725 kubelet[2702]: E0414 13:28:46.790513 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:28:46.924118 containerd[1579]: time="2026-04-14T13:28:46.923903083Z" level=info msg="StopPodSandbox for \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\"" Apr 14 13:28:46.925040 containerd[1579]: time="2026-04-14T13:28:46.924204458Z" level=info msg="TearDown network for sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" successfully" Apr 14 13:28:46.925040 containerd[1579]: time="2026-04-14T13:28:46.924233053Z" level=info msg="StopPodSandbox for \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" returns successfully" Apr 14 13:28:47.008628 containerd[1579]: time="2026-04-14T13:28:47.008156918Z" level=info msg="RemovePodSandbox for \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\"" Apr 14 13:28:47.013151 containerd[1579]: time="2026-04-14T13:28:47.012921493Z" level=info msg="Forcibly stopping sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\"" Apr 14 13:28:47.014988 containerd[1579]: time="2026-04-14T13:28:47.013452974Z" level=info msg="TearDown network for sandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" successfully" Apr 14 13:28:47.064174 containerd[1579]: time="2026-04-14T13:28:47.062789086Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
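The repeated "cni plugin not initialized" failures above (StopPodSandbox, kuberuntime_gc) all have one root cause: containerd has no CNI network configuration loaded yet, because Cilium only writes its conflist into /etc/cni/net.d once the agent comes up, and the old agent's config was cleaned away with the pod. A toy readiness probe along the same lines; /etc/cni/net.d is the conventional default directory, configurable in both kubelet and containerd:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// cniConfigured reports whether any CNI config file exists in dir,
// which is the precondition the log is waiting on.
func cniConfigured(dir string) (bool, error) {
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return false, err
		}
		if len(m) > 0 {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := cniConfigured("/etc/cni/net.d")
	if err != nil {
		panic(err)
	}
	if !ok {
		// The state this log is in: sandbox teardown that needs the plugin
		// keeps failing until a config file appears.
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady")
		return
	}
	fmt.Println("NetworkReady=true")
}
```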
Apr 14 13:28:47.064174 containerd[1579]: time="2026-04-14T13:28:47.063771972Z" level=info msg="RemovePodSandbox \"02621f84f502530d7c8ba2ce02a5ff17761a0e6033ea45dd22cf9b184bfbae90\" returns successfully" Apr 14 13:28:47.224708 containerd[1579]: time="2026-04-14T13:28:47.224155440Z" level=info msg="StopPodSandbox for \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\"" Apr 14 13:28:47.224708 containerd[1579]: time="2026-04-14T13:28:47.224440768Z" level=info msg="TearDown network for sandbox \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\" successfully" Apr 14 13:28:47.224708 containerd[1579]: time="2026-04-14T13:28:47.224459352Z" level=info msg="StopPodSandbox for \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\" returns successfully" Apr 14 13:28:47.342111 containerd[1579]: time="2026-04-14T13:28:47.334005304Z" level=info msg="RemovePodSandbox for \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\"" Apr 14 13:28:47.342111 containerd[1579]: time="2026-04-14T13:28:47.334506030Z" level=info msg="Forcibly stopping sandbox \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\"" Apr 14 13:28:47.342111 containerd[1579]: time="2026-04-14T13:28:47.340405331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wltbk,Uid:a2d8fcdb-1f86-4fba-ad54-71dd33b85848,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\"" Apr 14 13:28:47.381280 containerd[1579]: time="2026-04-14T13:28:47.348370725Z" level=info msg="TearDown network for sandbox \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\" successfully" Apr 14 13:28:47.475783 containerd[1579]: time="2026-04-14T13:28:47.473130008Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
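Several of the surrounding errors are benign races rather than real failures: by the time kubelet re-queries or re-deletes a container (396ce9…, 8e208f…, 882276…) or a sandbox (02621f…, b3a22e…), it is already gone and the runtime answers with gRPC NotFound, which the caller then treats as "already done". A sketch of that idempotent-cleanup convention; deleteContainer here is a stand-in that reproduces the race, not a real CRI client:

```go
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// deleteContainer stands in for a CRI RemoveContainer call; it always
// returns the NotFound race seen in the log, where the container is
// already gone before the retry lands.
func deleteContainer(id string) error {
	return status.Error(codes.NotFound,
		fmt.Sprintf("an error occurred when try to find container %q: not found", id))
}

// removeIdempotent treats NotFound as success: if the container no
// longer exists, the desired state already holds.
func removeIdempotent(id string) error {
	err := deleteContainer(id)
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	if err := removeIdempotent("396ce981175b43ab"); err != nil {
		panic(err)
	}
	fmt.Println("removal treated as success despite NotFound")
}
```

This is why the "DeleteContainer returned error" lines are logged at info level and the pod cleanup still completes.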
Apr 14 13:28:47.523943 containerd[1579]: time="2026-04-14T13:28:47.518230129Z" level=info msg="RemovePodSandbox \"b3a22e2b40abf286002748a55a8bdb9280f77f0540f01003bac356a8cb4d85b9\" returns successfully" Apr 14 13:28:47.530861 kubelet[2702]: E0414 13:28:47.520393 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:28:47.534051 kubelet[2702]: E0414 13:28:47.533310 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:28:47.546758 containerd[1579]: time="2026-04-14T13:28:47.546353134Z" level=info msg="StopPodSandbox for \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\"" Apr 14 13:28:47.570425 containerd[1579]: time="2026-04-14T13:28:47.570030660Z" level=error msg="StopPodSandbox for \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\" failed" error="failed to destroy network for sandbox \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\": cni plugin not initialized" Apr 14 13:28:47.766601 kubelet[2702]: E0414 13:28:47.735485 2702 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\": cni plugin not initialized" podSandboxID="103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5" Apr 14 13:28:47.790606 kubelet[2702]: E0414 13:28:47.781807 2702 kuberuntime_gc.go:180] "Failed to stop sandbox before removing" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5\": cni plugin not initialized" sandboxID="103cde12b79ee298a4e71cd420e57c1c22add9dcb550dcc81a70d090e1a6adb5" Apr 14 13:28:48.139879 containerd[1579]: time="2026-04-14T13:28:48.138713440Z" level=info msg="CreateContainer within sandbox \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Apr 14 13:28:49.640077 containerd[1579]: time="2026-04-14T13:28:49.639878725Z" level=info msg="CreateContainer within sandbox \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"78933e5c48174f29ed1ab1620fc315edebc2f111b6f690f59387d2fe391e54cb\"" Apr 14 13:28:49.817704 containerd[1579]: time="2026-04-14T13:28:49.768889984Z" level=info msg="StartContainer for \"78933e5c48174f29ed1ab1620fc315edebc2f111b6f690f59387d2fe391e54cb\"" Apr 14 13:28:50.335728 kubelet[2702]: E0414 13:28:50.333503 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:28:50.392310 kubelet[2702]: E0414 13:28:50.390031 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.409s" Apr 14 13:28:50.750753 kubelet[2702]: E0414 13:28:50.749128 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:28:51.863959 containerd[1579]: time="2026-04-14T13:28:51.863659661Z" level=info msg="StartContainer for \"78933e5c48174f29ed1ab1620fc315edebc2f111b6f690f59387d2fe391e54cb\" returns successfully" Apr 14 13:28:52.395144 kubelet[2702]: E0414 13:28:52.394811 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:28:52.679919 kubelet[2702]: E0414 13:28:52.670203 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:28:53.730460 kubelet[2702]: E0414 13:28:53.730306 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.712s" Apr 14 13:28:53.731829 kubelet[2702]: E0414 13:28:53.730943 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:28:54.414942 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-78933e5c48174f29ed1ab1620fc315edebc2f111b6f690f59387d2fe391e54cb-rootfs.mount: Deactivated successfully. Apr 14 13:28:54.757853 containerd[1579]: time="2026-04-14T13:28:54.755804584Z" level=info msg="shim disconnected" id=78933e5c48174f29ed1ab1620fc315edebc2f111b6f690f59387d2fe391e54cb namespace=k8s.io Apr 14 13:28:54.757853 containerd[1579]: time="2026-04-14T13:28:54.756374891Z" level=warning msg="cleaning up after shim disconnected" id=78933e5c48174f29ed1ab1620fc315edebc2f111b6f690f59387d2fe391e54cb namespace=k8s.io Apr 14 13:28:54.757853 containerd[1579]: time="2026-04-14T13:28:54.756385404Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:28:54.950506 kubelet[2702]: E0414 13:28:54.948824 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:28:54.971322 kubelet[2702]: E0414 13:28:54.971109 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:28:55.482387 kubelet[2702]: E0414 13:28:55.476306 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.482s" Apr 14 13:28:56.325749 kubelet[2702]: E0414 13:28:56.250297 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:28:56.325749 kubelet[2702]: E0414 13:28:56.319904 
2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:28:57.708063 kubelet[2702]: E0414 13:28:57.706900 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:28:58.445110 kubelet[2702]: E0414 13:28:58.441504 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:29:01.109484 kubelet[2702]: E0414 13:29:01.109151 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.062s" Apr 14 13:29:01.290700 kubelet[2702]: E0414 13:29:01.126074 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:29:01.396127 containerd[1579]: time="2026-04-14T13:29:01.361127332Z" level=info msg="CreateContainer within sandbox \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Apr 14 13:29:02.142587 containerd[1579]: time="2026-04-14T13:29:02.132741568Z" level=info msg="CreateContainer within sandbox \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"63d24bf6df200db2848b0c621900803087f262e5a474ee3493db507a9ce0de69\"" Apr 14 13:29:02.392150 containerd[1579]: time="2026-04-14T13:29:02.388689504Z" level=info msg="StartContainer for \"63d24bf6df200db2848b0c621900803087f262e5a474ee3493db507a9ce0de69\"" Apr 14 13:29:03.457927 kubelet[2702]: E0414 13:29:03.456006 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.346s" Apr 14 13:29:04.561191 kubelet[2702]: E0414 13:29:04.557035 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:29:05.787227 kubelet[2702]: E0414 13:29:05.785410 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:29:05.895727 kubelet[2702]: E0414 13:29:05.893307 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.419s" Apr 14 13:29:06.942886 containerd[1579]: time="2026-04-14T13:29:06.932316987Z" level=info msg="StartContainer for \"63d24bf6df200db2848b0c621900803087f262e5a474ee3493db507a9ce0de69\" returns successfully" Apr 14 13:29:07.067928 kubelet[2702]: E0414 13:29:07.063902 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:29:07.100979 kubelet[2702]: E0414 13:29:07.100775 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:29:08.165753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63d24bf6df200db2848b0c621900803087f262e5a474ee3493db507a9ce0de69-rootfs.mount: Deactivated successfully. Apr 14 13:29:08.426840 containerd[1579]: time="2026-04-14T13:29:08.359989062Z" level=info msg="shim disconnected" id=63d24bf6df200db2848b0c621900803087f262e5a474ee3493db507a9ce0de69 namespace=k8s.io Apr 14 13:29:08.427710 containerd[1579]: time="2026-04-14T13:29:08.427636910Z" level=warning msg="cleaning up after shim disconnected" id=63d24bf6df200db2848b0c621900803087f262e5a474ee3493db507a9ce0de69 namespace=k8s.io Apr 14 13:29:08.427768 containerd[1579]: time="2026-04-14T13:29:08.427759679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 13:29:08.443952 kubelet[2702]: E0414 13:29:08.426785 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:29:09.736793 kubelet[2702]: E0414 13:29:09.722917 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.849s" Apr 14 13:29:09.947856 kubelet[2702]: E0414 13:29:09.941271 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 14 13:29:10.599380 containerd[1579]: time="2026-04-14T13:29:10.593367623Z" level=warning msg="cleanup warnings time=\"2026-04-14T13:29:10Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 14 13:29:10.821753 kubelet[2702]: E0414 13:29:10.820319 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256" Apr 14 13:29:10.935840 kubelet[2702]: E0414 13:29:10.860156 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.137s" Apr 14 13:29:10.946197 kubelet[2702]: E0414 13:29:10.861191 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 13:29:12.467492 kubelet[2702]: E0414 13:29:12.466892 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17" Apr 14 13:29:12.533915 kubelet[2702]: E0414 13:29:12.533461 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:13.044123 containerd[1579]: time="2026-04-14T13:29:13.043783988Z" level=info msg="CreateContainer within sandbox \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Apr 14 13:29:13.182215 kubelet[2702]: E0414 13:29:13.175931 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:13.351094 kubelet[2702]: E0414 13:29:13.348857 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:13.930180 containerd[1579]: time="2026-04-14T13:29:13.929437290Z" level=info msg="CreateContainer within sandbox \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"4457ff3ee20504085b62248da61da9becec12dec76d55b33cc78267976e25ba0\""
Apr 14 13:29:14.368988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1687767791.mount: Deactivated successfully.
Apr 14 13:29:14.826642 containerd[1579]: time="2026-04-14T13:29:14.825120897Z" level=info msg="StartContainer for \"4457ff3ee20504085b62248da61da9becec12dec76d55b33cc78267976e25ba0\""
Apr 14 13:29:15.212938 kubelet[2702]: E0414 13:29:15.209768 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:29:15.443192 kubelet[2702]: E0414 13:29:15.442775 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.479s"
Apr 14 13:29:15.475004 kubelet[2702]: E0414 13:29:15.463554 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:15.654864 kubelet[2702]: E0414 13:29:15.617514 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:16.983845 containerd[1579]: time="2026-04-14T13:29:16.983575477Z" level=info msg="StartContainer for \"4457ff3ee20504085b62248da61da9becec12dec76d55b33cc78267976e25ba0\" returns successfully"
Apr 14 13:29:17.022953 kubelet[2702]: E0414 13:29:17.015260 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:17.040099 kubelet[2702]: E0414 13:29:17.039674 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:18.205163 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4457ff3ee20504085b62248da61da9becec12dec76d55b33cc78267976e25ba0-rootfs.mount: Deactivated successfully.
Apr 14 13:29:18.424714 containerd[1579]: time="2026-04-14T13:29:18.423012661Z" level=info msg="shim disconnected" id=4457ff3ee20504085b62248da61da9becec12dec76d55b33cc78267976e25ba0 namespace=k8s.io
Apr 14 13:29:18.424714 containerd[1579]: time="2026-04-14T13:29:18.423214920Z" level=warning msg="cleaning up after shim disconnected" id=4457ff3ee20504085b62248da61da9becec12dec76d55b33cc78267976e25ba0 namespace=k8s.io
Apr 14 13:29:18.424714 containerd[1579]: time="2026-04-14T13:29:18.423227700Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:29:19.130441 kubelet[2702]: E0414 13:29:19.120938 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:19.246941 kubelet[2702]: E0414 13:29:19.242614 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:19.621087 kubelet[2702]: E0414 13:29:19.620884 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:21.307643 kubelet[2702]: E0414 13:29:21.306941 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:29:21.331753 kubelet[2702]: E0414 13:29:21.331383 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.386s"
Apr 14 13:29:21.848296 kubelet[2702]: E0414 13:29:21.847212 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:22.012341 kubelet[2702]: E0414 13:29:21.997091 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:22.038082 kubelet[2702]: E0414 13:29:22.036260 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:24.227529 kubelet[2702]: E0414 13:29:24.223372 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:24.343800 containerd[1579]: time="2026-04-14T13:29:24.273165030Z" level=info msg="CreateContainer within sandbox \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Apr 14 13:29:24.812123 kubelet[2702]: E0414 13:29:24.808508 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:25.232231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1650037624.mount: Deactivated successfully.
Apr 14 13:29:25.433887 containerd[1579]: time="2026-04-14T13:29:25.429214852Z" level=info msg="CreateContainer within sandbox \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c112f36cf7924a51a7563a47965644d5873e6aec727694ae95da135965a8aed4\""
Apr 14 13:29:25.631715 containerd[1579]: time="2026-04-14T13:29:25.596743167Z" level=info msg="StartContainer for \"c112f36cf7924a51a7563a47965644d5873e6aec727694ae95da135965a8aed4\""
Apr 14 13:29:26.320206 kubelet[2702]: E0414 13:29:26.319932 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:29:26.645463 kubelet[2702]: E0414 13:29:26.644206 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:26.645463 kubelet[2702]: E0414 13:29:26.645358 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:29.042575 containerd[1579]: time="2026-04-14T13:29:29.042347625Z" level=info msg="StartContainer for \"c112f36cf7924a51a7563a47965644d5873e6aec727694ae95da135965a8aed4\" returns successfully"
Apr 14 13:29:29.244509 kubelet[2702]: E0414 13:29:29.243231 2702 kubelet.go:2627] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.291s"
Apr 14 13:29:29.848665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c112f36cf7924a51a7563a47965644d5873e6aec727694ae95da135965a8aed4-rootfs.mount: Deactivated successfully.
Apr 14 13:29:29.851512 containerd[1579]: time="2026-04-14T13:29:29.850426756Z" level=info msg="shim disconnected" id=c112f36cf7924a51a7563a47965644d5873e6aec727694ae95da135965a8aed4 namespace=k8s.io
Apr 14 13:29:29.851512 containerd[1579]: time="2026-04-14T13:29:29.850525076Z" level=warning msg="cleaning up after shim disconnected" id=c112f36cf7924a51a7563a47965644d5873e6aec727694ae95da135965a8aed4 namespace=k8s.io
Apr 14 13:29:29.851512 containerd[1579]: time="2026-04-14T13:29:29.850560640Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 14 13:29:29.972585 kubelet[2702]: E0414 13:29:29.972314 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:30.041625 kubelet[2702]: E0414 13:29:30.041317 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:30.052076 kubelet[2702]: E0414 13:29:30.048591 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:30.383492 containerd[1579]: time="2026-04-14T13:29:30.372978715Z" level=warning msg="cleanup warnings time=\"2026-04-14T13:29:30Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Apr 14 13:29:31.423695 kubelet[2702]: E0414 13:29:31.418691 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:29:31.446596 kubelet[2702]: E0414 13:29:31.446500 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:31.850898 containerd[1579]: time="2026-04-14T13:29:31.848878537Z" level=info msg="CreateContainer within sandbox \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Apr 14 13:29:32.092607 kubelet[2702]: E0414 13:29:32.092413 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:32.092607 kubelet[2702]: E0414 13:29:32.092495 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:32.570527 containerd[1579]: time="2026-04-14T13:29:32.568233655Z" level=info msg="CreateContainer within sandbox \"d4617a62ab1336c0e7f50062ea6f938a0cd0a651d9966aec7a7292823ad060ab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"75596c1180cddb9701b983031052ab3feb897d58b57ee6d0b0409a53a42f58cf\""
Apr 14 13:29:32.656816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4200740569.mount: Deactivated successfully.
Apr 14 13:29:32.674421 containerd[1579]: time="2026-04-14T13:29:32.672503046Z" level=info msg="StartContainer for \"75596c1180cddb9701b983031052ab3feb897d58b57ee6d0b0409a53a42f58cf\""
Apr 14 13:29:33.863885 containerd[1579]: time="2026-04-14T13:29:33.863013076Z" level=info msg="StartContainer for \"75596c1180cddb9701b983031052ab3feb897d58b57ee6d0b0409a53a42f58cf\" returns successfully"
Apr 14 13:29:34.044053 kubelet[2702]: E0414 13:29:34.042118 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:34.048746 kubelet[2702]: E0414 13:29:34.046764 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:36.200763 kubelet[2702]: E0414 13:29:36.196693 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:36.200763 kubelet[2702]: E0414 13:29:36.196967 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:36.430252 kubelet[2702]: E0414 13:29:36.430009 2702 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 14 13:29:37.425641 kubelet[2702]: E0414 13:29:37.425289 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:38.127724 kubelet[2702]: E0414 13:29:38.126562 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:38.127724 kubelet[2702]: E0414 13:29:38.126970 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"
Apr 14 13:29:38.128599 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Apr 14 13:29:38.441743 kubelet[2702]: E0414 13:29:38.409122 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 13:29:39.141167 sshd[5435]: pam_unix(sshd:session): session closed for user core
Apr 14 13:29:39.152627 systemd[1]: sshd@30-10.0.0.13:22-10.0.0.1:53578.service: Deactivated successfully.
Apr 14 13:29:39.155587 systemd[1]: session-31.scope: Deactivated successfully.
Apr 14 13:29:39.299038 systemd-logind[1561]: Session 31 logged out. Waiting for processes to exit.
Apr 14 13:29:39.302450 systemd-logind[1561]: Removed session 31.
Apr 14 13:29:39.943167 kubelet[2702]: E0414 13:29:39.942794 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-jc82b" podUID="35ec3304-c011-45a9-8315-a39d73673d17"
Apr 14 13:29:39.943167 kubelet[2702]: E0414 13:29:39.942960 2702 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-674b8bbfcf-hld84" podUID="67191550-1737-456e-a429-02a63e9c7256"