Apr 16 03:45:35.234847 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Wed Apr 15 22:39:17 -00 2026
Apr 16 03:45:35.234913 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 03:45:35.234929 kernel: BIOS-provided physical RAM map:
Apr 16 03:45:35.234937 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 16 03:45:35.234946 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 16 03:45:35.234954 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 16 03:45:35.234964 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 16 03:45:35.234973 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 16 03:45:35.234995 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 16 03:45:35.235004 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 16 03:45:35.235013 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 16 03:45:35.235024 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 16 03:45:35.235033 kernel: NX (Execute Disable) protection: active
Apr 16 03:45:35.235074 kernel: APIC: Static calls initialized
Apr 16 03:45:35.235084 kernel: SMBIOS 2.8 present.
Apr 16 03:45:35.235093 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 16 03:45:35.235118 kernel: DMI: Memory slots populated: 1/1
Apr 16 03:45:35.235127 kernel: Hypervisor detected: KVM
Apr 16 03:45:35.235136 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 03:45:35.235145 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 16 03:45:35.235154 kernel: kvm-clock: using sched offset of 12689731046 cycles
Apr 16 03:45:35.235164 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 16 03:45:35.235174 kernel: tsc: Detected 2793.438 MHz processor
Apr 16 03:45:35.235184 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 16 03:45:35.235194 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 16 03:45:35.235203 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 16 03:45:35.235216 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 16 03:45:35.235225 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 16 03:45:35.235235 kernel: Using GB pages for direct mapping
Apr 16 03:45:35.235244 kernel: ACPI: Early table checksum verification disabled
Apr 16 03:45:35.235253 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 16 03:45:35.235263 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 03:45:35.235272 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 03:45:35.235282 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 03:45:35.235291 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 16 03:45:35.235304 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 03:45:35.235313 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 03:45:35.235321 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 03:45:35.235331 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 16 03:45:35.235341 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 16 03:45:35.235355 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 16 03:45:35.235366 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 16 03:45:35.235376 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 16 03:45:35.235387 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 16 03:45:35.235397 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 16 03:45:35.235407 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 16 03:45:35.235417 kernel: No NUMA configuration found
Apr 16 03:45:35.235426 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 16 03:45:35.235437 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 16 03:45:35.235449 kernel: Zone ranges:
Apr 16 03:45:35.235460 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 16 03:45:35.235470 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 16 03:45:35.235479 kernel: Normal empty
Apr 16 03:45:35.235514 kernel: Device empty
Apr 16 03:45:35.235524 kernel: Movable zone start for each node
Apr 16 03:45:35.235534 kernel: Early memory node ranges
Apr 16 03:45:35.235544 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 16 03:45:35.235554 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 16 03:45:35.235568 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 16 03:45:35.235578 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 16 03:45:35.235587 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 16 03:45:35.235596 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 16 03:45:35.237306 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 16 03:45:35.237348 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 16 03:45:35.237361 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 16 03:45:35.237372 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 16 03:45:35.237381 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 16 03:45:35.237430 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 16 03:45:35.237441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 16 03:45:35.237451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 16 03:45:35.237462 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 16 03:45:35.237472 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 16 03:45:35.237482 kernel: TSC deadline timer available
Apr 16 03:45:35.237511 kernel: CPU topo: Max. logical packages: 1
Apr 16 03:45:35.237520 kernel: CPU topo: Max. logical dies: 1
Apr 16 03:45:35.237529 kernel: CPU topo: Max. dies per package: 1
Apr 16 03:45:35.237543 kernel: CPU topo: Max. threads per core: 1
Apr 16 03:45:35.237552 kernel: CPU topo: Num. cores per package: 4
Apr 16 03:45:35.237561 kernel: CPU topo: Num. threads per package: 4
Apr 16 03:45:35.237571 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 16 03:45:35.237580 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 16 03:45:35.237590 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 16 03:45:35.237600 kernel: kvm-guest: setup PV sched yield
Apr 16 03:45:35.237610 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 16 03:45:35.237620 kernel: Booting paravirtualized kernel on KVM
Apr 16 03:45:35.237631 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 16 03:45:35.237644 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 16 03:45:35.237654 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 16 03:45:35.237664 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 16 03:45:35.237674 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 16 03:45:35.237684 kernel: kvm-guest: PV spinlocks enabled
Apr 16 03:45:35.237694 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 16 03:45:35.237707 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae
Apr 16 03:45:35.237718 kernel: random: crng init done
Apr 16 03:45:35.237731 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 16 03:45:35.237741 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 16 03:45:35.237752 kernel: Fallback order for Node 0: 0
Apr 16 03:45:35.237761 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 16 03:45:35.237771 kernel: Policy zone: DMA32
Apr 16 03:45:35.237781 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 16 03:45:35.237792 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 16 03:45:35.237801 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 16 03:45:35.237810 kernel: ftrace: allocated 157 pages with 5 groups
Apr 16 03:45:35.237823 kernel: Dynamic Preempt: voluntary
Apr 16 03:45:35.237834 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 16 03:45:35.237846 kernel: rcu: RCU event tracing is enabled.
Apr 16 03:45:35.237856 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 16 03:45:35.237865 kernel: Trampoline variant of Tasks RCU enabled.
Apr 16 03:45:35.237876 kernel: Rude variant of Tasks RCU enabled.
Apr 16 03:45:35.238956 kernel: Tracing variant of Tasks RCU enabled.
Apr 16 03:45:35.239162 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 16 03:45:35.239174 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 16 03:45:35.239222 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 03:45:35.239232 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 03:45:35.239242 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 16 03:45:35.239252 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 16 03:45:35.239262 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 16 03:45:35.239272 kernel: Console: colour VGA+ 80x25
Apr 16 03:45:35.239294 kernel: printk: legacy console [ttyS0] enabled
Apr 16 03:45:35.239304 kernel: ACPI: Core revision 20240827
Apr 16 03:45:35.239315 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 16 03:45:35.239325 kernel: APIC: Switch to symmetric I/O mode setup
Apr 16 03:45:35.239336 kernel: x2apic enabled
Apr 16 03:45:35.239347 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 16 03:45:35.239360 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 16 03:45:35.239393 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 16 03:45:35.239405 kernel: kvm-guest: setup PV IPIs
Apr 16 03:45:35.239415 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 16 03:45:35.239429 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 03:45:35.239441 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 16 03:45:35.239452 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 16 03:45:35.239463 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 16 03:45:35.239474 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 16 03:45:35.239485 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 16 03:45:35.239515 kernel: Spectre V2 : Mitigation: Retpolines
Apr 16 03:45:35.239525 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 16 03:45:35.239535 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 16 03:45:35.239549 kernel: RETBleed: Vulnerable
Apr 16 03:45:35.239559 kernel: Speculative Store Bypass: Vulnerable
Apr 16 03:45:35.239569 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 16 03:45:35.239579 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 16 03:45:35.239590 kernel: active return thunk: its_return_thunk
Apr 16 03:45:35.239600 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 16 03:45:35.239611 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 16 03:45:35.239622 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 16 03:45:35.239633 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 16 03:45:35.239647 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 16 03:45:35.239659 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 16 03:45:35.239669 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 16 03:45:35.239680 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 16 03:45:35.239690 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 16 03:45:35.239700 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 16 03:45:35.239711 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 16 03:45:35.239722 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 16 03:45:35.239732 kernel: Freeing SMP alternatives memory: 32K
Apr 16 03:45:35.239766 kernel: pid_max: default: 32768 minimum: 301
Apr 16 03:45:35.239777 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 16 03:45:35.239788 kernel: landlock: Up and running.
Apr 16 03:45:35.239799 kernel: SELinux: Initializing.
Apr 16 03:45:35.239810 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 03:45:35.239821 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 16 03:45:35.239832 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 16 03:45:35.239843 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 16 03:45:35.239854 kernel: signal: max sigframe size: 3632
Apr 16 03:45:35.239867 kernel: rcu: Hierarchical SRCU implementation.
Apr 16 03:45:35.239877 kernel: rcu: Max phase no-delay instances is 400.
Apr 16 03:45:35.239888 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 16 03:45:35.239898 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 16 03:45:35.239908 kernel: smp: Bringing up secondary CPUs ...
Apr 16 03:45:35.239918 kernel: smpboot: x86: Booting SMP configuration:
Apr 16 03:45:35.239929 kernel: .... node #0, CPUs: #1 #2 #3
Apr 16 03:45:35.239940 kernel: smp: Brought up 1 node, 4 CPUs
Apr 16 03:45:35.239950 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 16 03:45:35.239965 kernel: Memory: 2419756K/2571752K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46224K init, 2524K bss, 146108K reserved, 0K cma-reserved)
Apr 16 03:45:35.240161 kernel: devtmpfs: initialized
Apr 16 03:45:35.240179 kernel: x86/mm: Memory block size: 128MB
Apr 16 03:45:35.240191 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 16 03:45:35.240202 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 16 03:45:35.240213 kernel: pinctrl core: initialized pinctrl subsystem
Apr 16 03:45:35.240224 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 16 03:45:35.240234 kernel: audit: initializing netlink subsys (disabled)
Apr 16 03:45:35.240246 kernel: audit: type=2000 audit(1776311124.902:1): state=initialized audit_enabled=0 res=1
Apr 16 03:45:35.240259 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 16 03:45:35.240268 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 16 03:45:35.240278 kernel: cpuidle: using governor menu
Apr 16 03:45:35.240288 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 16 03:45:35.240297 kernel: dca service started, version 1.12.1
Apr 16 03:45:35.240308 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 16 03:45:35.240319 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 16 03:45:35.240329 kernel: PCI: Using configuration type 1 for base access
Apr 16 03:45:35.240340 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 16 03:45:35.240354 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 16 03:45:35.240365 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 16 03:45:35.240376 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 16 03:45:35.240386 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 16 03:45:35.240397 kernel: ACPI: Added _OSI(Module Device)
Apr 16 03:45:35.240408 kernel: ACPI: Added _OSI(Processor Device)
Apr 16 03:45:35.240419 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 16 03:45:35.240429 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 16 03:45:35.240440 kernel: ACPI: Interpreter enabled
Apr 16 03:45:35.240453 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 16 03:45:35.240464 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 16 03:45:35.240476 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 16 03:45:35.240486 kernel: PCI: Using E820 reservations for host bridge windows
Apr 16 03:45:35.240525 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 16 03:45:35.240536 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 16 03:45:35.240931 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 16 03:45:35.241096 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 16 03:45:35.241212 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 16 03:45:35.241227 kernel: PCI host bridge to bus 0000:00
Apr 16 03:45:35.241457 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 16 03:45:35.241591 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 16 03:45:35.241687 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 16 03:45:35.241776 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 16 03:45:35.241867 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 16 03:45:35.241953 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 16 03:45:35.243157 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 16 03:45:35.243565 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 16 03:45:35.243734 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 16 03:45:35.243840 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 16 03:45:35.243940 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 16 03:45:35.244099 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 16 03:45:35.244209 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 16 03:45:35.244371 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 16 03:45:35.244478 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 16 03:45:35.244628 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 16 03:45:35.244815 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 16 03:45:35.244989 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 16 03:45:35.245205 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 16 03:45:35.245367 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 16 03:45:35.245471 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 16 03:45:35.245672 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 16 03:45:35.245786 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 16 03:45:35.245889 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 16 03:45:35.245994 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 16 03:45:35.246144 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 16 03:45:35.246279 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 16 03:45:35.246386 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 16 03:45:35.246534 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 16 03:45:35.247812 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 16 03:45:35.247931 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 16 03:45:35.249626 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 16 03:45:35.249754 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 16 03:45:35.249772 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 16 03:45:35.249783 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 16 03:45:35.249795 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 16 03:45:35.249805 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 16 03:45:35.249817 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 16 03:45:35.249853 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 16 03:45:35.249872 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 16 03:45:35.249883 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 16 03:45:35.249894 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 16 03:45:35.249905 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 16 03:45:35.249917 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 16 03:45:35.249927 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 16 03:45:35.249937 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 16 03:45:35.249948 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 16 03:45:35.249959 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 16 03:45:35.249973 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 16 03:45:35.249984 kernel: iommu: Default domain type: Translated
Apr 16 03:45:35.249995 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 16 03:45:35.250007 kernel: PCI: Using ACPI for IRQ routing
Apr 16 03:45:35.250017 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 16 03:45:35.250028 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 16 03:45:35.251178 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 16 03:45:35.251380 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 16 03:45:35.251519 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 16 03:45:35.251639 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 16 03:45:35.251655 kernel: vgaarb: loaded
Apr 16 03:45:35.251666 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 16 03:45:35.251677 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 16 03:45:35.251688 kernel: clocksource: Switched to clocksource kvm-clock
Apr 16 03:45:35.251699 kernel: VFS: Disk quotas dquot_6.6.0
Apr 16 03:45:35.251711 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 16 03:45:35.251721 kernel: pnp: PnP ACPI init
Apr 16 03:45:35.251950 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 16 03:45:35.251977 kernel: pnp: PnP ACPI: found 6 devices
Apr 16 03:45:35.251987 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 16 03:45:35.251998 kernel: NET: Registered PF_INET protocol family
Apr 16 03:45:35.252008 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 16 03:45:35.252018 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 16 03:45:35.252029 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 16 03:45:35.252092 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 16 03:45:35.252103 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 16 03:45:35.252118 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 16 03:45:35.252128 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 03:45:35.252139 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 16 03:45:35.252150 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 16 03:45:35.252158 kernel: NET: Registered PF_XDP protocol family
Apr 16 03:45:35.252257 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 16 03:45:35.252336 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 16 03:45:35.252430 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 16 03:45:35.254148 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 16 03:45:35.254261 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 16 03:45:35.254351 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 16 03:45:35.254365 kernel: PCI: CLS 0 bytes, default 64
Apr 16 03:45:35.254375 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 16 03:45:35.254387 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 16 03:45:35.254398 kernel: Initialise system trusted keyrings
Apr 16 03:45:35.254410 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 16 03:45:35.254421 kernel: Key type asymmetric registered
Apr 16 03:45:35.254439 kernel: Asymmetric key parser 'x509' registered
Apr 16 03:45:35.254449 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 16 03:45:35.254460 kernel: io scheduler mq-deadline registered
Apr 16 03:45:35.254472 kernel: io scheduler kyber registered
Apr 16 03:45:35.254482 kernel: io scheduler bfq registered
Apr 16 03:45:35.254522 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 16 03:45:35.254535 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 16 03:45:35.254545 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 16 03:45:35.254555 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 16 03:45:35.254569 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 16 03:45:35.254579 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 16 03:45:35.254589 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 16 03:45:35.254600 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 16 03:45:35.254611 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 16 03:45:35.254832 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 16 03:45:35.254855 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 16 03:45:35.254978 kernel: rtc_cmos 00:04: registered as rtc0
Apr 16 03:45:35.255131 kernel: rtc_cmos 00:04: setting system clock to 2026-04-16T03:45:33 UTC (1776311133)
Apr 16 03:45:35.255224 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 16 03:45:35.255239 kernel: intel_pstate: CPU model not supported
Apr 16 03:45:35.255251 kernel: NET: Registered PF_INET6 protocol family
Apr 16 03:45:35.255262 kernel: Segment Routing with IPv6
Apr 16 03:45:35.255274 kernel: In-situ OAM (IOAM) with IPv6
Apr 16 03:45:35.255284 kernel: NET: Registered PF_PACKET protocol family
Apr 16 03:45:35.255295 kernel: Key type dns_resolver registered
Apr 16 03:45:35.255305 kernel: IPI shorthand broadcast: enabled
Apr 16 03:45:35.255320 kernel: sched_clock: Marking stable (8645019278, 452447175)->(9653598391, -556131938)
Apr 16 03:45:35.255331 kernel: registered taskstats version 1
Apr 16 03:45:35.255343 kernel: Loading compiled-in X.509 certificates
Apr 16 03:45:35.255380 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: 25c2b596b475a2918f2ba6f953b0a89c09a0d0ab'
Apr 16 03:45:35.255391 kernel: Demotion targets for Node 0: null
Apr 16 03:45:35.255402 kernel: Key type .fscrypt registered
Apr 16 03:45:35.255413 kernel: Key type fscrypt-provisioning registered
Apr 16 03:45:35.255424 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 16 03:45:35.255435 kernel: ima: Allocated hash algorithm: sha1
Apr 16 03:45:35.255449 kernel: ima: No architecture policies found
Apr 16 03:45:35.255460 kernel: clk: Disabling unused clocks
Apr 16 03:45:35.255470 kernel: Warning: unable to open an initial console.
Apr 16 03:45:35.255481 kernel: Freeing unused kernel image (initmem) memory: 46224K
Apr 16 03:45:35.255514 kernel: Write protecting the kernel read-only data: 40960k
Apr 16 03:45:35.255526 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 16 03:45:35.255536 kernel: Run /init as init process
Apr 16 03:45:35.255546 kernel: with arguments:
Apr 16 03:45:35.255557 kernel: /init
Apr 16 03:45:35.255571 kernel: with environment:
Apr 16 03:45:35.255581 kernel: HOME=/
Apr 16 03:45:35.255591 kernel: TERM=linux
Apr 16 03:45:35.255603 systemd[1]: Successfully made /usr/ read-only.
Apr 16 03:45:35.255619 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 03:45:35.255632 systemd[1]: Detected virtualization kvm.
Apr 16 03:45:35.255643 systemd[1]: Detected architecture x86-64.
Apr 16 03:45:35.255697 systemd[1]: Running in initrd.
Apr 16 03:45:35.255710 systemd[1]: No hostname configured, using default hostname.
Apr 16 03:45:35.255723 systemd[1]: Hostname set to .
Apr 16 03:45:35.255733 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 03:45:35.255744 systemd[1]: Queued start job for default target initrd.target.
Apr 16 03:45:35.255755 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 03:45:35.255770 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 03:45:35.255782 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 16 03:45:35.255794 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 03:45:35.255805 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 16 03:45:35.255817 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 16 03:45:35.255831 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 16 03:45:35.255843 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 16 03:45:35.255858 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 03:45:35.255870 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 03:45:35.255882 systemd[1]: Reached target paths.target - Path Units.
Apr 16 03:45:35.255893 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 03:45:35.255905 systemd[1]: Reached target swap.target - Swaps.
Apr 16 03:45:35.255916 systemd[1]: Reached target timers.target - Timer Units.
Apr 16 03:45:35.255927 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 03:45:35.255939 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 03:45:35.255951 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 16 03:45:35.255965 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 16 03:45:35.255978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 16 03:45:35.255989 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 16 03:45:35.256001 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 16 03:45:35.256013 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 03:45:35.256025 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 16 03:45:35.256088 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 16 03:45:35.256100 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 16 03:45:35.256112 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Apr 16 03:45:35.256143 systemd[1]: Starting systemd-fsck-usr.service... Apr 16 03:45:35.256169 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 16 03:45:35.256181 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 16 03:45:35.256193 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 03:45:35.256209 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 16 03:45:35.256222 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 16 03:45:35.256235 systemd[1]: Finished systemd-fsck-usr.service. Apr 16 03:45:35.256284 systemd-journald[202]: Collecting audit messages is disabled. Apr 16 03:45:35.256317 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 16 03:45:35.256331 systemd-journald[202]: Journal started Apr 16 03:45:35.256359 systemd-journald[202]: Runtime Journal (/run/log/journal/510b361a9fe2404097164285bddd1d99) is 6M, max 48.2M, 42.2M free. Apr 16 03:45:35.246868 systemd-modules-load[205]: Inserted module 'overlay' Apr 16 03:45:35.266949 systemd[1]: Started systemd-journald.service - Journal Service. Apr 16 03:45:35.288068 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 16 03:45:35.297108 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 16 03:45:35.307601 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 16 03:45:35.655637 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 16 03:45:35.660157 kernel: Bridge firewalling registered Apr 16 03:45:35.659886 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Apr 16 03:45:35.661629 systemd-modules-load[205]: Inserted module 'br_netfilter' Apr 16 03:45:35.664813 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 16 03:45:35.876188 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 03:45:35.876650 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 16 03:45:35.901301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 16 03:45:35.964771 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 03:45:36.026853 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 16 03:45:36.049974 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 16 03:45:36.105091 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 03:45:36.233317 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 16 03:45:36.258235 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 16 03:45:36.436822 systemd-resolved[236]: Positive Trust Anchors: Apr 16 03:45:36.436839 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 03:45:36.456752 dracut-cmdline[247]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=15b40c09f238fba45b5bb3e18ef7e289d4e557e0500075f5731dd7eaa53962ae Apr 16 03:45:36.436864 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 03:45:36.454608 systemd-resolved[236]: Defaulting to hostname 'linux'. Apr 16 03:45:36.460035 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 03:45:36.495402 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 03:45:37.663198 kernel: SCSI subsystem initialized Apr 16 03:45:37.729280 kernel: Loading iSCSI transport class v2.0-870. 
Apr 16 03:45:37.822683 kernel: iscsi: registered transport (tcp) Apr 16 03:45:37.972872 kernel: iscsi: registered transport (qla4xxx) Apr 16 03:45:37.973273 kernel: QLogic iSCSI HBA Driver Apr 16 03:45:38.400832 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 16 03:45:38.468780 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 16 03:45:38.490988 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 16 03:45:38.743017 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 16 03:45:38.762878 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 16 03:45:39.263384 kernel: hrtimer: interrupt took 41907994 ns Apr 16 03:45:39.286122 kernel: raid6: avx512x4 gen() 18963 MB/s Apr 16 03:45:39.309198 kernel: raid6: avx512x2 gen() 21001 MB/s Apr 16 03:45:39.330151 kernel: raid6: avx512x1 gen() 8358 MB/s Apr 16 03:45:39.356010 kernel: raid6: avx2x4 gen() 10173 MB/s Apr 16 03:45:39.391434 kernel: raid6: avx2x2 gen() 8759 MB/s Apr 16 03:45:39.413234 kernel: raid6: avx2x1 gen() 5247 MB/s Apr 16 03:45:39.413792 kernel: raid6: using algorithm avx512x2 gen() 21001 MB/s Apr 16 03:45:39.441275 kernel: raid6: .... xor() 13501 MB/s, rmw enabled Apr 16 03:45:39.441857 kernel: raid6: using avx512x2 recovery algorithm Apr 16 03:45:39.620576 kernel: xor: automatically using best checksumming function avx Apr 16 03:45:40.650232 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 16 03:45:40.854756 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 16 03:45:40.886562 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 03:45:41.225635 systemd-udevd[455]: Using default interface naming scheme 'v255'. Apr 16 03:45:41.260927 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 16 03:45:41.288266 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 16 03:45:41.641461 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation Apr 16 03:45:42.302513 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 16 03:45:42.370876 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 16 03:45:42.878467 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 16 03:45:42.923841 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 16 03:45:43.130635 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Apr 16 03:45:43.149142 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Apr 16 03:45:43.244238 kernel: cryptd: max_cpu_qlen set to 1000 Apr 16 03:45:43.245089 kernel: libata version 3.00 loaded. Apr 16 03:45:43.246749 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 16 03:45:43.246900 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 03:45:43.277614 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 03:45:43.295561 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 16 03:45:43.295598 kernel: GPT:9289727 != 19775487 Apr 16 03:45:43.295609 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 16 03:45:43.295620 kernel: GPT:9289727 != 19775487 Apr 16 03:45:43.295630 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 16 03:45:43.295641 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 03:45:43.310221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 03:45:43.328100 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Apr 16 03:45:43.356813 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Apr 16 03:45:43.382502 kernel: ahci 0000:00:1f.2: version 3.0 Apr 16 03:45:43.382760 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 16 03:45:43.404863 kernel: AES CTR mode by8 optimization enabled Apr 16 03:45:43.458007 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Apr 16 03:45:43.458426 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Apr 16 03:45:43.459659 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 16 03:45:43.610454 kernel: scsi host0: ahci Apr 16 03:45:43.651154 kernel: scsi host1: ahci Apr 16 03:45:43.654101 kernel: scsi host2: ahci Apr 16 03:45:43.657752 kernel: scsi host3: ahci Apr 16 03:45:43.658204 kernel: scsi host4: ahci Apr 16 03:45:43.659098 kernel: scsi host5: ahci Apr 16 03:45:43.660340 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1 Apr 16 03:45:43.660388 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1 Apr 16 03:45:43.660404 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1 Apr 16 03:45:43.660417 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1 Apr 16 03:45:43.660444 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1 Apr 16 03:45:43.660457 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1 Apr 16 03:45:43.661419 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. 
Apr 16 03:45:44.148372 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 16 03:45:44.158254 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 16 03:45:44.158310 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 16 03:45:44.158324 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 16 03:45:44.158337 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 16 03:45:44.158441 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Apr 16 03:45:44.158456 kernel: ata3.00: LPM support broken, forcing max_power Apr 16 03:45:44.158468 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Apr 16 03:45:44.158480 kernel: ata3.00: applying bridge limits Apr 16 03:45:44.158491 kernel: ata3.00: LPM support broken, forcing max_power Apr 16 03:45:44.158503 kernel: ata3.00: configured for UDMA/100 Apr 16 03:45:44.158515 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Apr 16 03:45:44.196403 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 16 03:45:44.214568 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 16 03:45:44.215197 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 03:45:44.278474 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 16 03:45:44.325681 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 03:45:44.350750 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Apr 16 03:45:44.359394 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Apr 16 03:45:44.380200 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Apr 16 03:45:44.394648 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 16 03:45:44.665942 disk-uuid[647]: Primary Header is updated. 
Apr 16 03:45:44.665942 disk-uuid[647]: Secondary Entries is updated. Apr 16 03:45:44.665942 disk-uuid[647]: Secondary Header is updated. Apr 16 03:45:44.717196 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 03:45:45.547689 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 16 03:45:45.575446 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 16 03:45:45.596811 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 16 03:45:45.597466 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 16 03:45:45.610030 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 16 03:45:45.869643 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 16 03:45:45.933101 disk-uuid[648]: The operation has completed successfully. Apr 16 03:45:45.974266 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 16 03:45:46.084358 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 16 03:45:46.084567 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 16 03:45:46.609625 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 16 03:45:46.947827 sh[675]: Success Apr 16 03:45:47.186919 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 16 03:45:47.194484 kernel: device-mapper: uevent: version 1.0.3 Apr 16 03:45:47.203273 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Apr 16 03:45:47.295415 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Apr 16 03:45:47.871480 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 16 03:45:47.904482 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 16 03:45:47.957228 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 16 03:45:48.264945 kernel: BTRFS: device fsid 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 devid 1 transid 35 /dev/mapper/usr (253:0) scanned by mount (688) Apr 16 03:45:48.313667 kernel: BTRFS info (device dm-0): first mount of filesystem 20ab7e7c-5d1e-4cd5-bec1-5b111d7138f2 Apr 16 03:45:48.314400 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 16 03:45:48.407416 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time Apr 16 03:45:48.441669 kernel: BTRFS info (device dm-0 state E): enabling free space tree Apr 16 03:45:48.678219 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 16 03:45:48.814480 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Apr 16 03:45:48.881212 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 16 03:45:48.924313 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 16 03:45:48.968327 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 16 03:45:49.120780 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (715) Apr 16 03:45:49.134730 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 03:45:49.135391 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 03:45:49.249817 kernel: BTRFS info (device vda6): turning on async discard Apr 16 03:45:49.254917 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 03:45:49.327803 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 03:45:49.353156 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 16 03:45:49.367687 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 16 03:45:50.706644 ignition[756]: Ignition 2.22.0 Apr 16 03:45:50.722440 ignition[756]: Stage: fetch-offline Apr 16 03:45:50.727741 ignition[756]: no configs at "/usr/lib/ignition/base.d" Apr 16 03:45:50.727821 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 03:45:50.728111 ignition[756]: parsed url from cmdline: "" Apr 16 03:45:50.728115 ignition[756]: no config URL provided Apr 16 03:45:50.728122 ignition[756]: reading system config file "/usr/lib/ignition/user.ign" Apr 16 03:45:50.728133 ignition[756]: no config at "/usr/lib/ignition/user.ign" Apr 16 03:45:50.728251 ignition[756]: op(1): [started] loading QEMU firmware config module Apr 16 03:45:50.728257 ignition[756]: op(1): executing: "modprobe" "qemu_fw_cfg" Apr 16 03:45:50.875736 ignition[756]: op(1): [finished] loading QEMU firmware config module Apr 16 03:45:50.969180 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 16 03:45:50.998861 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 03:45:51.140119 ignition[756]: parsing config with SHA512: 2897a9ba40f7e8975536d8640ce1e101788d12c8ac1dc50ed6741c1f7c6ffaf7ffd7758dc97c5d9ad14863248e6aa1b85e80f9e2557f88879c199c3d7bb4818a Apr 16 03:45:51.251435 unknown[756]: fetched base config from "system" Apr 16 03:45:51.251462 unknown[756]: fetched user config from "qemu" Apr 16 03:45:51.262953 ignition[756]: fetch-offline: fetch-offline passed Apr 16 03:45:51.263146 ignition[756]: Ignition finished successfully Apr 16 03:45:51.276743 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 16 03:45:52.007249 systemd-networkd[866]: lo: Link UP Apr 16 03:45:52.022018 systemd-networkd[866]: lo: Gained carrier Apr 16 03:45:52.035787 systemd-networkd[866]: Enumeration completed Apr 16 03:45:52.036466 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 16 03:45:52.036534 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 03:45:52.036538 systemd-networkd[866]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 03:45:52.046435 systemd[1]: Reached target network.target - Network. Apr 16 03:45:52.047416 systemd-networkd[866]: eth0: Link UP Apr 16 03:45:52.055216 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Apr 16 03:45:52.067312 systemd-networkd[866]: eth0: Gained carrier Apr 16 03:45:52.067333 systemd-networkd[866]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 03:45:52.067927 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 16 03:45:52.175392 systemd-networkd[866]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 03:45:52.909246 ignition[870]: Ignition 2.22.0 Apr 16 03:45:52.909259 ignition[870]: Stage: kargs Apr 16 03:45:53.083707 ignition[870]: no configs at "/usr/lib/ignition/base.d" Apr 16 03:45:53.097223 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 03:45:53.135062 ignition[870]: kargs: kargs passed Apr 16 03:45:53.135161 ignition[870]: Ignition finished successfully Apr 16 03:45:53.177948 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 16 03:45:53.345265 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 16 03:45:53.955261 systemd-networkd[866]: eth0: Gained IPv6LL Apr 16 03:45:54.042849 ignition[879]: Ignition 2.22.0 Apr 16 03:45:54.042882 ignition[879]: Stage: disks Apr 16 03:45:54.053490 ignition[879]: no configs at "/usr/lib/ignition/base.d" Apr 16 03:45:54.065312 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 03:45:54.184237 ignition[879]: disks: disks passed Apr 16 03:45:54.199594 ignition[879]: Ignition finished successfully Apr 16 03:45:54.248436 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 16 03:45:54.440192 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 16 03:45:54.490521 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 16 03:45:54.538913 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 16 03:45:54.576260 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 03:45:54.637138 systemd[1]: Reached target basic.target - Basic System. Apr 16 03:45:54.793146 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 16 03:45:55.063447 systemd-fsck[889]: ROOT: clean, 15/553520 files, 52789/553472 blocks Apr 16 03:45:55.190014 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 16 03:45:55.216932 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 16 03:45:56.646946 kernel: EXT4-fs (vda9): mounted filesystem 75cd5b5e-229f-474b-8de5-870bc4bccaf1 r/w with ordered data mode. Quota mode: none. Apr 16 03:45:56.739556 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 16 03:45:56.804659 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 16 03:45:56.836032 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 03:45:56.858935 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Apr 16 03:45:56.861019 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 16 03:45:56.861113 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 16 03:45:56.861149 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 16 03:45:56.940660 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (898) Apr 16 03:45:56.928213 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 16 03:45:56.954343 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 03:45:56.954404 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 03:45:56.946727 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 16 03:45:57.033553 kernel: BTRFS info (device vda6): turning on async discard Apr 16 03:45:57.052028 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 03:45:57.160528 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 16 03:45:58.036747 initrd-setup-root[923]: cut: /sysroot/etc/passwd: No such file or directory Apr 16 03:45:58.164398 initrd-setup-root[930]: cut: /sysroot/etc/group: No such file or directory Apr 16 03:45:58.309121 initrd-setup-root[937]: cut: /sysroot/etc/shadow: No such file or directory Apr 16 03:45:58.572359 initrd-setup-root[944]: cut: /sysroot/etc/gshadow: No such file or directory Apr 16 03:46:02.253555 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 16 03:46:02.278481 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 16 03:46:02.324290 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Apr 16 03:46:02.491148 kernel: BTRFS info (device vda6): last unmount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 03:46:02.505729 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 16 03:46:02.761823 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 16 03:46:03.294520 ignition[1012]: INFO : Ignition 2.22.0 Apr 16 03:46:03.294520 ignition[1012]: INFO : Stage: mount Apr 16 03:46:03.329188 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 03:46:03.341543 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 03:46:03.440715 ignition[1012]: INFO : mount: mount passed Apr 16 03:46:03.444702 ignition[1012]: INFO : Ignition finished successfully Apr 16 03:46:03.554576 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 16 03:46:03.561465 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 16 03:46:03.867183 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 16 03:46:04.270026 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1026) Apr 16 03:46:04.305299 kernel: BTRFS info (device vda6): first mount of filesystem 5c620efa-1d1f-4cb4-918b-5f9d92be2a74 Apr 16 03:46:04.305783 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 16 03:46:04.445033 kernel: BTRFS info (device vda6): turning on async discard Apr 16 03:46:04.448592 kernel: BTRFS info (device vda6): enabling free space tree Apr 16 03:46:04.513459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 16 03:46:05.866334 ignition[1043]: INFO : Ignition 2.22.0 Apr 16 03:46:05.866334 ignition[1043]: INFO : Stage: files Apr 16 03:46:05.866334 ignition[1043]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 16 03:46:05.866334 ignition[1043]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 16 03:46:05.952798 ignition[1043]: DEBUG : files: compiled without relabeling support, skipping Apr 16 03:46:06.143207 ignition[1043]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 16 03:46:06.143207 ignition[1043]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 16 03:46:06.227993 ignition[1043]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 16 03:46:06.267102 ignition[1043]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 16 03:46:06.308457 unknown[1043]: wrote ssh authorized keys file for user: core Apr 16 03:46:06.330460 ignition[1043]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 16 03:46:06.351893 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 03:46:06.366197 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 16 03:46:07.294221 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 16 03:46:09.223521 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 16 03:46:09.223521 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 16 03:46:09.223521 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" 
Apr 16 03:46:09.223521 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 16 03:46:09.223521 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 16 03:46:09.223521 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 03:46:09.341631 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 16 03:46:09.341631 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 03:46:09.341631 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 16 03:46:09.341631 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 03:46:09.341631 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 16 03:46:09.341631 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 16 03:46:09.341631 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 16 03:46:09.341631 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 16 03:46:09.341631 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 16 03:46:10.342952 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 16 03:46:17.720348 ignition[1043]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 16 03:46:17.720348 ignition[1043]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 16 03:46:17.736769 ignition[1043]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 03:46:17.736769 ignition[1043]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 16 03:46:17.736769 ignition[1043]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 16 03:46:17.736769 ignition[1043]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Apr 16 03:46:17.736769 ignition[1043]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 03:46:17.736769 ignition[1043]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Apr 16 03:46:17.736769 ignition[1043]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Apr 16 03:46:17.736769 ignition[1043]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Apr 16 03:46:18.697535 ignition[1043]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 03:46:18.963716 ignition[1043]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Apr 16 03:46:19.015612 ignition[1043]: INFO : files: op(f): [finished] setting 
preset to disabled for "coreos-metadata.service" Apr 16 03:46:19.015612 ignition[1043]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Apr 16 03:46:19.043407 ignition[1043]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Apr 16 03:46:19.057264 ignition[1043]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 16 03:46:19.057264 ignition[1043]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 16 03:46:19.057264 ignition[1043]: INFO : files: files passed Apr 16 03:46:19.057264 ignition[1043]: INFO : Ignition finished successfully Apr 16 03:46:19.173209 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 16 03:46:19.505931 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 16 03:46:19.553010 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 16 03:46:19.583936 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 16 03:46:19.589448 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 16 03:46:19.792580 initrd-setup-root-after-ignition[1072]: grep: /sysroot/oem/oem-release: No such file or directory Apr 16 03:46:20.012222 initrd-setup-root-after-ignition[1074]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 03:46:20.033366 initrd-setup-root-after-ignition[1074]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 16 03:46:20.053431 initrd-setup-root-after-ignition[1078]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 16 03:46:20.195486 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 16 03:46:20.296293 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Apr 16 03:46:20.429935 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 16 03:46:22.069648 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 16 03:46:22.110292 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 16 03:46:22.208759 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 16 03:46:22.229880 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 16 03:46:22.278524 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 16 03:46:22.555650 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 16 03:46:23.852874 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 03:46:23.956524 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 16 03:46:24.554409 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 16 03:46:24.604921 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 03:46:24.605835 systemd[1]: Stopped target timers.target - Timer Units.
Apr 16 03:46:24.642905 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 16 03:46:24.656271 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 16 03:46:24.708595 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 16 03:46:24.760033 systemd[1]: Stopped target basic.target - Basic System.
Apr 16 03:46:24.822548 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 16 03:46:24.830429 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 16 03:46:24.918447 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 16 03:46:24.929916 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 16 03:46:24.978578 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 16 03:46:25.023206 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 16 03:46:25.076877 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 16 03:46:25.158579 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 16 03:46:25.205011 systemd[1]: Stopped target swap.target - Swaps.
Apr 16 03:46:25.238437 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 16 03:46:25.267611 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 16 03:46:25.289469 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 16 03:46:25.310369 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 03:46:25.338405 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 16 03:46:25.374660 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 03:46:25.426956 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 16 03:46:25.427460 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 16 03:46:25.496268 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 16 03:46:25.565732 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 16 03:46:25.680389 systemd[1]: Stopped target paths.target - Path Units.
Apr 16 03:46:25.708436 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 16 03:46:25.722477 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 03:46:25.768667 systemd[1]: Stopped target slices.target - Slice Units.
Apr 16 03:46:25.794878 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 16 03:46:25.983540 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 16 03:46:26.017885 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 16 03:46:26.018385 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 16 03:46:26.018473 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 16 03:46:26.019135 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 16 03:46:26.019357 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 16 03:46:26.019542 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 16 03:46:26.019635 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 16 03:46:26.034092 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 16 03:46:26.050838 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 16 03:46:26.064676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 03:46:26.221303 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 16 03:46:26.229294 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 16 03:46:26.232794 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 03:46:26.252659 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 16 03:46:26.265571 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 16 03:46:26.303681 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 16 03:46:26.303926 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 16 03:46:26.382427 ignition[1098]: INFO : Ignition 2.22.0
Apr 16 03:46:26.382427 ignition[1098]: INFO : Stage: umount
Apr 16 03:46:26.382427 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 16 03:46:26.382427 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 16 03:46:26.453352 ignition[1098]: INFO : umount: umount passed
Apr 16 03:46:26.453352 ignition[1098]: INFO : Ignition finished successfully
Apr 16 03:46:26.393479 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 16 03:46:26.438003 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 16 03:46:26.438222 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 16 03:46:26.443795 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 16 03:46:26.443966 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 16 03:46:26.480440 systemd[1]: Stopped target network.target - Network.
Apr 16 03:46:26.484770 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 16 03:46:26.485124 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 16 03:46:26.514580 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 16 03:46:26.519478 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 16 03:46:26.538684 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 16 03:46:26.538919 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 16 03:46:26.601104 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 16 03:46:26.613873 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 16 03:46:26.654559 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 16 03:46:26.663394 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 16 03:46:26.665807 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 16 03:46:26.747027 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 16 03:46:26.797812 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 16 03:46:26.798651 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 16 03:46:26.880608 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 16 03:46:26.893759 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 16 03:46:26.904936 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 16 03:46:26.979244 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 16 03:46:27.015334 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 16 03:46:27.048370 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 16 03:46:27.051012 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 03:46:27.373677 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 16 03:46:27.383155 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 16 03:46:27.383599 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 16 03:46:27.453280 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 16 03:46:27.453415 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 16 03:46:27.500990 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 16 03:46:27.504629 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 16 03:46:27.540541 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 16 03:46:27.549618 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 16 03:46:27.605930 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 16 03:46:27.775997 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 16 03:46:27.776784 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 16 03:46:27.828905 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 16 03:46:27.843012 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 16 03:46:27.968280 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 16 03:46:28.069231 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 16 03:46:28.087586 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 16 03:46:28.087886 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 03:46:28.104765 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 16 03:46:28.104944 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 16 03:46:28.119867 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 16 03:46:28.119966 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 16 03:46:28.155463 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 16 03:46:28.155798 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 16 03:46:28.254927 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 16 03:46:28.268621 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 16 03:46:28.279617 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 03:46:28.307944 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 16 03:46:28.308322 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 03:46:28.335859 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 16 03:46:28.341924 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 16 03:46:28.369624 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 16 03:46:28.369812 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 16 03:46:28.369862 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 16 03:46:28.370351 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 16 03:46:28.370483 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 16 03:46:28.382853 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 16 03:46:28.383112 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 16 03:46:28.444183 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 16 03:46:28.518348 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 16 03:46:28.686035 systemd[1]: Switching root.
Apr 16 03:46:28.817134 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Apr 16 03:46:28.817471 systemd-journald[202]: Journal stopped
Apr 16 03:46:56.794887 kernel: SELinux: policy capability network_peer_controls=1
Apr 16 03:46:56.794995 kernel: SELinux: policy capability open_perms=1
Apr 16 03:46:56.795032 kernel: SELinux: policy capability extended_socket_class=1
Apr 16 03:46:56.795076 kernel: SELinux: policy capability always_check_network=0
Apr 16 03:46:56.795088 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 16 03:46:56.795105 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 16 03:46:56.795120 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 16 03:46:56.795132 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 16 03:46:56.795144 kernel: SELinux: policy capability userspace_initial_context=0
Apr 16 03:46:56.795156 kernel: audit: type=1403 audit(1776311190.606:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 16 03:46:56.795171 systemd[1]: Successfully loaded SELinux policy in 667.095ms.
Apr 16 03:46:56.795205 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 396.929ms.
Apr 16 03:46:56.795219 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 16 03:46:56.795233 systemd[1]: Detected virtualization kvm.
Apr 16 03:46:56.795246 systemd[1]: Detected architecture x86-64.
Apr 16 03:46:56.795257 systemd[1]: Detected first boot.
Apr 16 03:46:56.795270 systemd[1]: Initializing machine ID from VM UUID.
Apr 16 03:46:56.795282 zram_generator::config[1143]: No configuration found.
Apr 16 03:46:56.795296 kernel: Guest personality initialized and is inactive
Apr 16 03:46:56.799342 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 16 03:46:56.799531 kernel: Initialized host personality
Apr 16 03:46:56.799544 kernel: NET: Registered PF_VSOCK protocol family
Apr 16 03:46:56.799559 systemd[1]: Populated /etc with preset unit settings.
Apr 16 03:46:56.799577 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 16 03:46:56.799589 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 16 03:46:56.799602 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 16 03:46:56.799614 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 16 03:46:56.799628 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 16 03:46:56.799678 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 16 03:46:56.799691 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 16 03:46:56.799704 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 16 03:46:56.799716 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 16 03:46:56.799728 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 16 03:46:56.799741 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 16 03:46:56.799753 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 16 03:46:56.799773 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 16 03:46:56.799786 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 16 03:46:56.799814 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 16 03:46:56.799827 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 16 03:46:56.800014 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 16 03:46:56.800030 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 16 03:46:56.800074 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 16 03:46:56.800089 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 16 03:46:56.800101 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 16 03:46:56.800131 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 16 03:46:56.800143 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 16 03:46:56.800156 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 16 03:46:56.800169 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 16 03:46:56.800181 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 16 03:46:56.800193 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 16 03:46:56.800206 systemd[1]: Reached target slices.target - Slice Units.
Apr 16 03:46:56.800218 systemd[1]: Reached target swap.target - Swaps.
Apr 16 03:46:56.800231 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 16 03:46:56.800242 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 16 03:46:56.800272 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 16 03:46:56.800285 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 16 03:46:56.800298 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 16 03:46:56.800312 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 16 03:46:56.800324 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 16 03:46:56.800342 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 16 03:46:56.800356 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 16 03:46:56.800368 systemd[1]: Mounting media.mount - External Media Directory...
Apr 16 03:46:56.800381 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 03:46:56.800408 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 16 03:46:56.800420 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 16 03:46:56.800433 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 16 03:46:56.800446 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 16 03:46:56.800458 systemd[1]: Reached target machines.target - Containers.
Apr 16 03:46:56.800471 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 16 03:46:56.800483 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 16 03:46:56.800496 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 16 03:46:56.800522 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 16 03:46:56.800536 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 16 03:46:56.800563 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 16 03:46:56.800577 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 16 03:46:56.800600 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 16 03:46:56.800613 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 16 03:46:56.800627 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 16 03:46:56.800638 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 16 03:46:56.800664 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 16 03:46:56.800677 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 16 03:46:56.800688 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 16 03:46:56.800701 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 16 03:46:56.800714 kernel: fuse: init (API version 7.41)
Apr 16 03:46:56.800729 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 16 03:46:56.800742 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 16 03:46:56.800755 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 16 03:46:56.800768 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 16 03:46:56.800797 kernel: loop: module loaded
Apr 16 03:46:56.800816 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 16 03:46:56.800828 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 16 03:46:56.800858 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 16 03:46:56.800872 systemd[1]: Stopped verity-setup.service.
Apr 16 03:46:56.800902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 16 03:46:56.800915 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 16 03:46:56.800927 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 16 03:46:56.800940 systemd[1]: Mounted media.mount - External Media Directory.
Apr 16 03:46:56.800974 kernel: ACPI: bus type drm_connector registered
Apr 16 03:46:56.800998 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 16 03:46:56.801012 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 16 03:46:56.801024 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 16 03:46:56.801036 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 16 03:46:56.801136 systemd-journald[1227]: Collecting audit messages is disabled.
Apr 16 03:46:56.801166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 16 03:46:56.801179 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 16 03:46:56.801211 systemd-journald[1227]: Journal started
Apr 16 03:46:56.801236 systemd-journald[1227]: Runtime Journal (/run/log/journal/510b361a9fe2404097164285bddd1d99) is 6M, max 48.2M, 42.2M free.
Apr 16 03:46:50.833442 systemd[1]: Queued start job for default target multi-user.target.
Apr 16 03:46:51.060769 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 16 03:46:51.115578 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 16 03:46:51.253921 systemd[1]: systemd-journald.service: Consumed 2.147s CPU time.
Apr 16 03:46:56.808881 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 16 03:46:56.822140 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 16 03:46:56.829513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 16 03:46:56.829873 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 16 03:46:56.834958 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 16 03:46:56.835354 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 16 03:46:56.854997 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 16 03:46:56.871544 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 16 03:46:56.884823 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 16 03:46:56.891711 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 16 03:46:56.912032 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 16 03:46:56.913340 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 16 03:46:56.934655 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 16 03:46:56.945067 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 16 03:46:56.982668 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 16 03:46:57.001824 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 16 03:46:57.063535 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 16 03:46:57.163441 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 16 03:46:57.251113 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 16 03:46:57.254369 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 16 03:46:57.254427 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 16 03:46:57.289791 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 16 03:46:57.424626 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 16 03:46:57.444279 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 16 03:46:57.466304 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 16 03:46:57.494939 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 16 03:46:57.505448 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 16 03:46:57.520330 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 16 03:46:57.530140 systemd-journald[1227]: Time spent on flushing to /var/log/journal/510b361a9fe2404097164285bddd1d99 is 102.293ms for 980 entries.
Apr 16 03:46:57.530140 systemd-journald[1227]: System Journal (/var/log/journal/510b361a9fe2404097164285bddd1d99) is 8M, max 195.6M, 187.6M free.
Apr 16 03:46:57.761221 systemd-journald[1227]: Received client request to flush runtime journal.
Apr 16 03:46:57.527486 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 16 03:46:57.532626 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 16 03:46:57.583622 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 16 03:46:57.597499 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 16 03:46:57.774963 kernel: loop0: detected capacity change from 0 to 217752
Apr 16 03:46:57.616007 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 16 03:46:57.623237 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 16 03:46:57.634500 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 16 03:46:57.652366 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 16 03:46:57.672178 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 16 03:46:57.746536 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 16 03:46:57.781343 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 16 03:46:57.892069 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 16 03:46:57.948602 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 16 03:46:58.017766 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 16 03:46:58.027909 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 16 03:46:58.041212 kernel: loop1: detected capacity change from 0 to 128560
Apr 16 03:46:58.072392 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 16 03:46:58.116762 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 16 03:46:58.277011 kernel: loop2: detected capacity change from 0 to 110984
Apr 16 03:46:58.427622 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Apr 16 03:46:58.427664 systemd-tmpfiles[1281]: ACLs are not supported, ignoring.
Apr 16 03:46:58.458034 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 16 03:46:58.528373 kernel: loop3: detected capacity change from 0 to 217752
Apr 16 03:46:58.585139 kernel: loop4: detected capacity change from 0 to 128560
Apr 16 03:46:58.806510 kernel: loop5: detected capacity change from 0 to 110984
Apr 16 03:46:58.875739 (sd-merge)[1286]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 16 03:46:58.876536 (sd-merge)[1286]: Merged extensions into '/usr'.
Apr 16 03:46:58.893119 systemd[1]: Reload requested from client PID 1262 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 16 03:46:58.893144 systemd[1]: Reloading...
Apr 16 03:46:59.339157 zram_generator::config[1314]: No configuration found.
Apr 16 03:47:00.732209 systemd[1]: Reloading finished in 1838 ms.
Apr 16 03:47:00.751440 ldconfig[1257]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 16 03:47:00.874311 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 16 03:47:00.898226 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 16 03:47:01.076188 systemd[1]: Starting ensure-sysext.service...
Apr 16 03:47:01.105493 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 16 03:47:01.330388 systemd[1]: Reload requested from client PID 1349 ('systemctl') (unit ensure-sysext.service)...
Apr 16 03:47:01.330875 systemd[1]: Reloading...
Apr 16 03:47:01.408717 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 16 03:47:01.410173 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 16 03:47:01.410586 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 03:47:01.410923 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 03:47:01.417354 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 03:47:01.417713 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Apr 16 03:47:01.417779 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Apr 16 03:47:01.461495 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 03:47:01.461533 systemd-tmpfiles[1350]: Skipping /boot
Apr 16 03:47:01.511763 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 03:47:01.512775 systemd-tmpfiles[1350]: Skipping /boot Apr 16 03:47:01.825146 zram_generator::config[1380]: No configuration found. Apr 16 03:47:12.258628 systemd[1]: Reloading finished in 10926 ms. Apr 16 03:47:12.597769 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 16 03:47:12.709779 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 16 03:47:12.802628 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 03:47:12.928275 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 16 03:47:12.943292 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 16 03:47:12.962541 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 16 03:47:13.013433 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 16 03:47:13.035738 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 16 03:47:13.077397 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 03:47:13.077754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 03:47:13.128695 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 03:47:13.139726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 03:47:13.171639 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 03:47:13.234951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 16 03:47:13.245624 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 03:47:13.265226 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 16 03:47:13.269802 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 03:47:13.316261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 03:47:13.316619 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 03:47:13.382824 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 03:47:13.388453 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 03:47:13.417869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 03:47:13.427680 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 03:47:13.427935 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 03:47:13.428152 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 03:47:13.438816 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 16 03:47:13.445580 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Apr 16 03:47:13.453851 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 03:47:13.465871 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 03:47:13.469462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 03:47:13.469728 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 03:47:13.517740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 03:47:13.519161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 03:47:13.528666 systemd-udevd[1426]: Using default interface naming scheme 'v255'. Apr 16 03:47:13.537716 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 03:47:13.542775 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 16 03:47:13.559591 augenrules[1452]: No rules Apr 16 03:47:13.567661 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 16 03:47:13.586428 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 16 03:47:13.645446 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 16 03:47:13.672369 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 16 03:47:13.690447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 16 03:47:13.690860 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 16 03:47:13.769955 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Apr 16 03:47:13.781323 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 16 03:47:13.783254 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 03:47:13.817695 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 03:47:13.835243 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 16 03:47:13.849645 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 16 03:47:13.873446 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 16 03:47:13.887675 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 16 03:47:13.897550 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 16 03:47:13.907531 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 16 03:47:13.914582 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 16 03:47:13.944785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 16 03:47:13.945988 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 16 03:47:13.960652 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 16 03:47:13.960890 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 16 03:47:13.985529 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 16 03:47:14.022128 systemd[1]: Finished ensure-sysext.service. Apr 16 03:47:14.108317 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 16 03:47:14.111194 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 16 03:47:14.111446 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 16 03:47:14.115434 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 16 03:47:14.120392 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 16 03:47:14.335601 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 16 03:47:14.755322 kernel: mousedev: PS/2 mouse device common for all mice Apr 16 03:47:14.804763 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 16 03:47:14.822879 kernel: ACPI: button: Power Button [PWRF] Apr 16 03:47:14.879981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 16 03:47:14.897282 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 16 03:47:15.124218 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 16 03:47:15.139705 systemd-resolved[1425]: Positive Trust Anchors: Apr 16 03:47:15.139721 systemd-resolved[1425]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 16 03:47:15.139754 systemd-resolved[1425]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 16 03:47:15.162729 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Apr 16 03:47:15.169856 systemd[1]: Reached target time-set.target - System Time Set. Apr 16 03:47:15.177508 systemd-resolved[1425]: Defaulting to hostname 'linux'. Apr 16 03:47:15.180754 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 16 03:47:15.190611 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 16 03:47:15.230282 systemd-networkd[1502]: lo: Link UP Apr 16 03:47:15.230312 systemd-networkd[1502]: lo: Gained carrier Apr 16 03:47:15.234362 systemd[1]: Reached target sysinit.target - System Initialization. Apr 16 03:47:15.235441 systemd-networkd[1502]: Enumeration completed Apr 16 03:47:15.236094 systemd-networkd[1502]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 03:47:15.236099 systemd-networkd[1502]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 16 03:47:15.242433 systemd-networkd[1502]: eth0: Link UP Apr 16 03:47:15.242588 systemd-networkd[1502]: eth0: Gained carrier Apr 16 03:47:15.242718 systemd-networkd[1502]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 16 03:47:15.258581 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 16 03:47:15.280785 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 16 03:47:15.286557 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 16 03:47:15.299505 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 16 03:47:15.307169 systemd-networkd[1502]: eth0: DHCPv4 address 10.0.0.115/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 16 03:47:15.307611 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Apr 16 03:47:15.315904 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 16 03:47:15.325532 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 16 03:47:15.325596 systemd[1]: Reached target paths.target - Path Units. Apr 16 03:47:15.364236 systemd[1]: Reached target timers.target - Timer Units. Apr 16 03:47:15.393297 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 16 03:47:15.393695 systemd-timesyncd[1503]: Network configuration changed, trying to establish connection. Apr 16 03:47:16.133514 systemd-timesyncd[1503]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 16 03:47:16.133629 systemd-timesyncd[1503]: Initial clock synchronization to Thu 2026-04-16 03:47:16.133305 UTC. Apr 16 03:47:16.141773 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 16 03:47:16.176473 systemd-resolved[1425]: Clock change detected. Flushing caches. Apr 16 03:47:16.188737 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 16 03:47:16.199767 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 16 03:47:16.207122 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 16 03:47:16.232603 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 16 03:47:16.237069 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 16 03:47:16.255817 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 16 03:47:16.260329 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 16 03:47:16.282523 systemd[1]: Reached target network.target - Network. Apr 16 03:47:16.284495 systemd[1]: Reached target sockets.target - Socket Units. Apr 16 03:47:16.292474 systemd[1]: Reached target basic.target - Basic System. 
Apr 16 03:47:16.296772 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 16 03:47:16.296824 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 16 03:47:16.372208 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 16 03:47:16.372803 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 16 03:47:16.371144 systemd[1]: Starting containerd.service - containerd container runtime... Apr 16 03:47:16.390680 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 16 03:47:16.405635 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 16 03:47:16.449501 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 16 03:47:16.481808 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 16 03:47:16.484623 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 16 03:47:16.535726 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 16 03:47:16.546056 jq[1538]: false Apr 16 03:47:16.560800 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 16 03:47:16.611845 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 16 03:47:16.642539 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 16 03:47:16.664660 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 16 03:47:16.693414 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 16 03:47:16.700641 extend-filesystems[1539]: Found /dev/vda6 Apr 16 03:47:16.717230 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... 
Apr 16 03:47:16.748186 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing passwd entry cache Apr 16 03:47:16.746869 oslogin_cache_refresh[1540]: Refreshing passwd entry cache Apr 16 03:47:16.750542 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 16 03:47:16.775783 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 16 03:47:16.776881 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 16 03:47:16.777627 extend-filesystems[1539]: Found /dev/vda9 Apr 16 03:47:16.782281 extend-filesystems[1539]: Checking size of /dev/vda9 Apr 16 03:47:16.783300 systemd[1]: Starting update-engine.service - Update Engine... Apr 16 03:47:16.827413 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 16 03:47:16.889070 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 16 03:47:16.902374 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting users, quitting Apr 16 03:47:16.902374 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 16 03:47:16.902374 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Refreshing group entry cache Apr 16 03:47:16.897832 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 16 03:47:16.897353 oslogin_cache_refresh[1540]: Failure getting users, quitting Apr 16 03:47:16.901627 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 16 03:47:16.897375 oslogin_cache_refresh[1540]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Apr 16 03:47:16.897546 oslogin_cache_refresh[1540]: Refreshing group entry cache Apr 16 03:47:16.909238 extend-filesystems[1539]: Resized partition /dev/vda9 Apr 16 03:47:16.910721 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 16 03:47:16.911005 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 16 03:47:16.919476 extend-filesystems[1568]: resize2fs 1.47.3 (8-Jul-2025) Apr 16 03:47:16.920626 systemd[1]: motdgen.service: Deactivated successfully. Apr 16 03:47:16.928004 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 16 03:47:16.941865 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Failure getting groups, quitting Apr 16 03:47:16.941865 google_oslogin_nss_cache[1540]: oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 03:47:16.935740 oslogin_cache_refresh[1540]: Failure getting groups, quitting Apr 16 03:47:16.935758 oslogin_cache_refresh[1540]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 16 03:47:16.961675 jq[1560]: true Apr 16 03:47:16.973046 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 16 03:47:17.010870 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 16 03:47:17.014601 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 16 03:47:17.027909 jq[1574]: true Apr 16 03:47:17.048544 systemd-networkd[1502]: eth0: Gained IPv6LL Apr 16 03:47:17.054890 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 16 03:47:17.152673 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Apr 16 03:47:17.212031 sshd_keygen[1576]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 16 03:47:17.249595 update_engine[1555]: I20260416 03:47:17.249473 1555 main.cc:92] Flatcar Update Engine starting Apr 16 03:47:17.281546 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 16 03:47:17.294537 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 16 03:47:17.285438 systemd[1]: Reached target network-online.target - Network is Online. Apr 16 03:47:17.359462 tar[1567]: linux-amd64/LICENSE Apr 16 03:47:17.359986 extend-filesystems[1568]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 16 03:47:17.359986 extend-filesystems[1568]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 16 03:47:17.359986 extend-filesystems[1568]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 16 03:47:17.311005 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 16 03:47:17.520836 tar[1567]: linux-amd64/helm Apr 16 03:47:17.560286 extend-filesystems[1539]: Resized filesystem in /dev/vda9 Apr 16 03:47:17.360481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:47:17.454955 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 16 03:47:17.480427 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 16 03:47:17.492284 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 16 03:47:17.492650 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 16 03:47:17.616668 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 16 03:47:17.663497 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 16 03:47:17.717304 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Apr 16 03:47:17.723807 systemd[1]: Started sshd@0-10.0.0.115:22-10.0.0.1:57156.service - OpenSSH per-connection server daemon (10.0.0.1:57156). Apr 16 03:47:17.752346 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 16 03:47:17.753599 bash[1617]: Updated "/home/core/.ssh/authorized_keys" Apr 16 03:47:17.768316 systemd-logind[1549]: Watching system buttons on /dev/input/event2 (Power Button) Apr 16 03:47:17.768364 systemd-logind[1549]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 16 03:47:17.773765 systemd[1]: issuegen.service: Deactivated successfully. Apr 16 03:47:17.774141 systemd-logind[1549]: New seat seat0. Apr 16 03:47:17.774149 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 16 03:47:17.781892 systemd[1]: Started systemd-logind.service - User Login Management. Apr 16 03:47:17.791486 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 16 03:47:17.797887 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 16 03:47:17.814954 dbus-daemon[1536]: [system] SELinux support is enabled Apr 16 03:47:17.982119 dbus-daemon[1536]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 16 03:47:17.977334 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 16 03:47:18.037720 update_engine[1555]: I20260416 03:47:18.035701 1555 update_check_scheduler.cc:74] Next update check in 4m4s Apr 16 03:47:17.980164 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 16 03:47:17.980402 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 16 03:47:17.984539 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 16 03:47:17.984954 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 16 03:47:17.985432 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 16 03:47:17.985831 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 16 03:47:17.986281 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 16 03:47:18.028022 systemd[1]: Started update-engine.service - Update Engine. Apr 16 03:47:18.042870 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 16 03:47:18.219386 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 16 03:47:18.227551 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 16 03:47:18.245755 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 16 03:47:18.255009 systemd[1]: Reached target getty.target - Login Prompts. Apr 16 03:47:18.306312 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 16 03:47:18.884675 containerd[1575]: time="2026-04-16T03:47:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 16 03:47:18.884675 containerd[1575]: time="2026-04-16T03:47:18.875819890Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 16 03:47:18.898482 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.923803384Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="15.226µs" Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.924177654Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.924209701Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.925297822Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.925342417Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.925461071Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.925525317Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.925538882Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.925993725Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.926016992Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.926031195Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 16 03:47:18.928916 containerd[1575]: time="2026-04-16T03:47:18.926040893Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 16 03:47:18.936036 containerd[1575]: time="2026-04-16T03:47:18.929064195Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 16 03:47:18.936036 containerd[1575]: time="2026-04-16T03:47:18.933213314Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 03:47:18.936036 containerd[1575]: time="2026-04-16T03:47:18.933544682Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 16 03:47:18.936036 containerd[1575]: time="2026-04-16T03:47:18.933558328Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 16 03:47:18.936036 containerd[1575]: time="2026-04-16T03:47:18.933598645Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 16 03:47:18.936036 containerd[1575]: time="2026-04-16T03:47:18.934006463Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 16 03:47:18.936036 containerd[1575]: time="2026-04-16T03:47:18.934230385Z" level=info msg="metadata content store policy set" policy=shared Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.076837587Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077149214Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077177939Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077195082Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077210384Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077222165Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077245875Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077266169Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077280713Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077292229Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077302632Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.077316305Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.078181936Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 16 03:47:19.081841 containerd[1575]: time="2026-04-16T03:47:19.078211641Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.078228960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.078241858Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.078254761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.078268420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.078281560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.078297375Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 16 
03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.078310256Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.078324535Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.078337305Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.081313608Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.081622326Z" level=info msg="Start snapshots syncer" Apr 16 03:47:19.173770 containerd[1575]: time="2026-04-16T03:47:19.081763076Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 16 03:47:19.175159 containerd[1575]: time="2026-04-16T03:47:19.175056705Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 16 03:47:19.175662 containerd[1575]: time="2026-04-16T03:47:19.175621030Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 16 03:47:19.175819 containerd[1575]: time="2026-04-16T03:47:19.175804537Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 16 03:47:19.197270 containerd[1575]: time="2026-04-16T03:47:19.196761411Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 16 03:47:19.197270 containerd[1575]: time="2026-04-16T03:47:19.196913095Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 16 03:47:19.197270 containerd[1575]: time="2026-04-16T03:47:19.196930996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 16 03:47:19.197270 containerd[1575]: time="2026-04-16T03:47:19.196942590Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.200349332Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.200578300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.200593623Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.200633883Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.200645245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.200659278Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.201414047Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.201449530Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.201461473Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.201471596Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.201480087Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.201490294Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.201513107Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 16 03:47:19.201621 containerd[1575]: time="2026-04-16T03:47:19.201531863Z" level=info msg="runtime interface created" Apr 16 03:47:19.203277 containerd[1575]: time="2026-04-16T03:47:19.201537401Z" level=info msg="created NRI interface" Apr 16 03:47:19.203277 containerd[1575]: time="2026-04-16T03:47:19.201546187Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 16 03:47:19.203277 containerd[1575]: time="2026-04-16T03:47:19.201566401Z" level=info msg="Connect containerd service" Apr 16 03:47:19.203277 containerd[1575]: time="2026-04-16T03:47:19.201597348Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 16 03:47:19.207048 sshd[1625]: 
Accepted publickey for core from 10.0.0.1 port 57156 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:47:19.207614 containerd[1575]: time="2026-04-16T03:47:19.205433842Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 16 03:47:19.227187 sshd-session[1625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:47:19.259535 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 16 03:47:19.555389 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 16 03:47:20.038076 systemd-logind[1549]: New session 1 of user core. Apr 16 03:47:20.114508 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 16 03:47:20.162502 containerd[1575]: time="2026-04-16T03:47:20.161427644Z" level=info msg="Start subscribing containerd event" Apr 16 03:47:20.162502 containerd[1575]: time="2026-04-16T03:47:20.161572228Z" level=info msg="Start recovering state" Apr 16 03:47:20.162502 containerd[1575]: time="2026-04-16T03:47:20.162164808Z" level=info msg="Start event monitor" Apr 16 03:47:20.162502 containerd[1575]: time="2026-04-16T03:47:20.162193966Z" level=info msg="Start cni network conf syncer for default" Apr 16 03:47:20.162502 containerd[1575]: time="2026-04-16T03:47:20.162212735Z" level=info msg="Start streaming server" Apr 16 03:47:20.162502 containerd[1575]: time="2026-04-16T03:47:20.162221782Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 16 03:47:20.162502 containerd[1575]: time="2026-04-16T03:47:20.162235837Z" level=info msg="runtime interface starting up..." Apr 16 03:47:20.162502 containerd[1575]: time="2026-04-16T03:47:20.162242553Z" level=info msg="starting plugins..." 
Apr 16 03:47:20.162502 containerd[1575]: time="2026-04-16T03:47:20.162334706Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 16 03:47:20.175557 containerd[1575]: time="2026-04-16T03:47:20.175385345Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 16 03:47:20.175682 containerd[1575]: time="2026-04-16T03:47:20.175561313Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 16 03:47:20.175682 containerd[1575]: time="2026-04-16T03:47:20.175633887Z" level=info msg="containerd successfully booted in 1.312118s" Apr 16 03:47:20.263999 tar[1567]: linux-amd64/README.md Apr 16 03:47:20.329635 systemd[1]: Started containerd.service - containerd container runtime. Apr 16 03:47:20.653194 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 16 03:47:20.686539 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 16 03:47:20.826177 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 16 03:47:20.826955 (systemd)[1681]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 16 03:47:21.159834 systemd-logind[1549]: New session c1 of user core. Apr 16 03:47:23.380774 systemd[1681]: Queued start job for default target default.target. Apr 16 03:47:23.513874 systemd[1681]: Created slice app.slice - User Application Slice. Apr 16 03:47:23.513965 systemd[1681]: Reached target paths.target - Paths. Apr 16 03:47:23.514446 systemd[1681]: Reached target timers.target - Timers. Apr 16 03:47:23.537620 systemd[1681]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 16 03:47:23.874648 systemd[1681]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 16 03:47:23.875893 systemd[1681]: Reached target sockets.target - Sockets. Apr 16 03:47:23.876027 systemd[1681]: Reached target basic.target - Basic System. Apr 16 03:47:23.876078 systemd[1681]: Reached target default.target - Main User Target. 
Apr 16 03:47:23.876156 systemd[1681]: Startup finished in 2.580s. Apr 16 03:47:23.876289 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 16 03:47:23.904152 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 16 03:47:24.149142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 03:47:24.152461 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 16 03:47:24.167565 systemd[1]: Started sshd@1-10.0.0.115:22-10.0.0.1:36722.service - OpenSSH per-connection server daemon (10.0.0.1:36722). Apr 16 03:47:24.174655 systemd[1]: Startup finished in 8.928s (kernel) + 56.195s (initrd) + 53.425s (userspace) = 1min 58.549s. Apr 16 03:47:24.249640 (kubelet)[1702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 03:47:25.697598 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 36722 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:47:25.700423 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:47:25.983143 systemd-logind[1549]: New session 2 of user core. Apr 16 03:47:26.146809 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 16 03:47:26.659219 sshd[1712]: Connection closed by 10.0.0.1 port 36722 Apr 16 03:47:26.701210 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Apr 16 03:47:26.990860 systemd[1]: sshd@1-10.0.0.115:22-10.0.0.1:36722.service: Deactivated successfully. Apr 16 03:47:27.072511 systemd[1]: session-2.scope: Deactivated successfully. Apr 16 03:47:27.081250 systemd-logind[1549]: Session 2 logged out. Waiting for processes to exit. Apr 16 03:47:27.130548 systemd[1]: Started sshd@2-10.0.0.115:22-10.0.0.1:39478.service - OpenSSH per-connection server daemon (10.0.0.1:39478). Apr 16 03:47:27.217250 systemd-logind[1549]: Removed session 2. 
Apr 16 03:47:28.294189 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 39478 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:47:28.404553 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:47:28.677239 systemd-logind[1549]: New session 3 of user core. Apr 16 03:47:28.818418 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 16 03:47:29.387595 sshd[1723]: Connection closed by 10.0.0.1 port 39478 Apr 16 03:47:29.461907 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Apr 16 03:47:29.726161 systemd[1]: sshd@2-10.0.0.115:22-10.0.0.1:39478.service: Deactivated successfully. Apr 16 03:47:29.760272 systemd[1]: session-3.scope: Deactivated successfully. Apr 16 03:47:29.770734 systemd-logind[1549]: Session 3 logged out. Waiting for processes to exit. Apr 16 03:47:29.796754 systemd[1]: Started sshd@3-10.0.0.115:22-10.0.0.1:39486.service - OpenSSH per-connection server daemon (10.0.0.1:39486). Apr 16 03:47:29.819319 systemd-logind[1549]: Removed session 3. Apr 16 03:47:30.687961 kubelet[1702]: E0416 03:47:30.675685 1702 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 03:47:30.732926 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 03:47:30.739906 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 03:47:30.740745 systemd[1]: kubelet.service: Consumed 3.014s CPU time, 258.4M memory peak. 
Apr 16 03:47:31.813715 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 39486 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:47:31.820352 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:47:31.852278 systemd-logind[1549]: New session 4 of user core. Apr 16 03:47:31.961972 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 16 03:47:33.282206 sshd[1733]: Connection closed by 10.0.0.1 port 39486 Apr 16 03:47:33.296454 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Apr 16 03:47:33.562645 systemd[1]: sshd@3-10.0.0.115:22-10.0.0.1:39486.service: Deactivated successfully. Apr 16 03:47:33.707731 systemd[1]: session-4.scope: Deactivated successfully. Apr 16 03:47:33.902336 systemd-logind[1549]: Session 4 logged out. Waiting for processes to exit. Apr 16 03:47:34.335176 systemd-logind[1549]: Removed session 4. Apr 16 03:47:34.347747 systemd[1]: Started sshd@4-10.0.0.115:22-10.0.0.1:39496.service - OpenSSH per-connection server daemon (10.0.0.1:39496). Apr 16 03:47:37.991245 sshd[1739]: Accepted publickey for core from 10.0.0.1 port 39496 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:47:38.161692 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:47:38.794748 systemd-logind[1549]: New session 5 of user core. Apr 16 03:47:38.880343 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 16 03:47:39.670346 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 16 03:47:39.670976 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 03:47:39.785479 sudo[1743]: pam_unix(sudo:session): session closed for user root Apr 16 03:47:39.839006 sshd[1742]: Connection closed by 10.0.0.1 port 39496 Apr 16 03:47:39.835424 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Apr 16 03:47:39.989145 systemd[1]: Started sshd@5-10.0.0.115:22-10.0.0.1:39190.service - OpenSSH per-connection server daemon (10.0.0.1:39190). Apr 16 03:47:40.000628 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:39496.service: Deactivated successfully. Apr 16 03:47:40.008598 systemd[1]: sshd@4-10.0.0.115:22-10.0.0.1:39496.service: Consumed 1.092s CPU time, 4M memory peak. Apr 16 03:47:40.071203 systemd[1]: session-5.scope: Deactivated successfully. Apr 16 03:47:40.234746 systemd-logind[1549]: Session 5 logged out. Waiting for processes to exit. Apr 16 03:47:40.249242 systemd-logind[1549]: Removed session 5. Apr 16 03:47:40.769492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 16 03:47:40.864119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:47:40.992725 sshd[1746]: Accepted publickey for core from 10.0.0.1 port 39190 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:47:41.012429 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:47:41.276676 systemd-logind[1549]: New session 6 of user core. Apr 16 03:47:41.355877 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 16 03:47:41.661905 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 16 03:47:41.663705 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 03:47:41.837244 sudo[1757]: pam_unix(sudo:session): session closed for user root Apr 16 03:47:42.029187 sudo[1756]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 16 03:47:42.031577 sudo[1756]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 03:47:42.114781 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 16 03:47:42.386298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 03:47:42.582055 (kubelet)[1771]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 03:47:42.910409 augenrules[1791]: No rules Apr 16 03:47:43.093796 systemd[1]: audit-rules.service: Deactivated successfully. Apr 16 03:47:43.121894 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 16 03:47:43.147357 sudo[1756]: pam_unix(sudo:session): session closed for user root Apr 16 03:47:43.183887 sshd[1755]: Connection closed by 10.0.0.1 port 39190 Apr 16 03:47:43.236843 kubelet[1771]: E0416 03:47:43.236747 1771 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 03:47:43.248761 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Apr 16 03:47:43.276035 systemd[1]: sshd@5-10.0.0.115:22-10.0.0.1:39190.service: Deactivated successfully. 
Apr 16 03:47:43.298335 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 03:47:43.299330 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 03:47:43.311723 systemd[1]: kubelet.service: Consumed 852ms CPU time, 110.7M memory peak. Apr 16 03:47:43.326613 systemd[1]: session-6.scope: Deactivated successfully. Apr 16 03:47:43.695442 systemd-logind[1549]: Session 6 logged out. Waiting for processes to exit. Apr 16 03:47:43.751690 systemd[1]: Started sshd@6-10.0.0.115:22-10.0.0.1:39198.service - OpenSSH per-connection server daemon (10.0.0.1:39198). Apr 16 03:47:43.810995 systemd-logind[1549]: Removed session 6. Apr 16 03:47:44.903920 sshd[1801]: Accepted publickey for core from 10.0.0.1 port 39198 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:47:44.911439 sshd-session[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:47:45.547937 systemd-logind[1549]: New session 7 of user core. Apr 16 03:47:45.616789 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 16 03:47:46.204485 sudo[1805]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 16 03:47:46.220317 sudo[1805]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 16 03:47:52.067636 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 16 03:47:52.174839 (dockerd)[1826]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 16 03:47:53.416873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 16 03:47:53.496648 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Apr 16 03:47:54.864727 dockerd[1826]: time="2026-04-16T03:47:54.864267723Z" level=info msg="Starting up" Apr 16 03:47:54.898742 dockerd[1826]: time="2026-04-16T03:47:54.898020382Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 16 03:47:55.167740 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 03:47:55.208262 dockerd[1826]: time="2026-04-16T03:47:55.207333737Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 16 03:47:55.314708 (kubelet)[1855]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 03:47:55.769456 kubelet[1855]: E0416 03:47:55.758921 1855 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 03:47:55.793742 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 03:47:55.794185 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 03:47:55.810068 dockerd[1826]: time="2026-04-16T03:47:55.809815948Z" level=info msg="Loading containers: start." Apr 16 03:47:55.811753 systemd[1]: kubelet.service: Consumed 711ms CPU time, 112.6M memory peak. Apr 16 03:47:55.916276 kernel: Initializing XFRM netlink socket Apr 16 03:48:03.637917 update_engine[1555]: I20260416 03:48:03.636430 1555 update_attempter.cc:509] Updating boot flags... Apr 16 03:48:05.998054 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 16 03:48:06.038465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:48:07.012289 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 03:48:07.028715 systemd-networkd[1502]: docker0: Link UP Apr 16 03:48:07.064346 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 03:48:07.161444 dockerd[1826]: time="2026-04-16T03:48:07.160421793Z" level=info msg="Loading containers: done." Apr 16 03:48:07.382235 dockerd[1826]: time="2026-04-16T03:48:07.366939200Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 16 03:48:07.382235 dockerd[1826]: time="2026-04-16T03:48:07.381005183Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 16 03:48:07.396477 dockerd[1826]: time="2026-04-16T03:48:07.394559554Z" level=info msg="Initializing buildkit" Apr 16 03:48:07.701025 kubelet[2039]: E0416 03:48:07.699158 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 03:48:07.757804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 03:48:07.758197 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 03:48:07.759249 systemd[1]: kubelet.service: Consumed 585ms CPU time, 109.9M memory peak. 
Apr 16 03:48:09.457324 dockerd[1826]: time="2026-04-16T03:48:09.451052139Z" level=info msg="Completed buildkit initialization" Apr 16 03:48:09.566484 dockerd[1826]: time="2026-04-16T03:48:09.530877894Z" level=info msg="Daemon has completed initialization" Apr 16 03:48:09.566484 dockerd[1826]: time="2026-04-16T03:48:09.533872862Z" level=info msg="API listen on /run/docker.sock" Apr 16 03:48:09.567728 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 16 03:48:17.905818 containerd[1575]: time="2026-04-16T03:48:17.899326163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 16 03:48:17.908001 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 16 03:48:17.910573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:48:20.570184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 03:48:20.642012 (kubelet)[2099]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 03:48:21.238637 kubelet[2099]: E0416 03:48:21.238401 2099 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 03:48:21.244264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3913269904.mount: Deactivated successfully. Apr 16 03:48:21.245855 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 03:48:21.246040 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 03:48:21.246661 systemd[1]: kubelet.service: Consumed 1.002s CPU time, 109.3M memory peak. Apr 16 03:48:31.514733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Apr 16 03:48:31.678434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:48:35.472324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 16 03:48:35.617946 (kubelet)[2170]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 03:48:36.572929 kubelet[2170]: E0416 03:48:36.572396 2170 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 03:48:36.578921 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 03:48:36.579248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 03:48:36.580010 systemd[1]: kubelet.service: Consumed 1.322s CPU time, 110.7M memory peak. Apr 16 03:48:46.692532 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Apr 16 03:48:46.895714 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:48:49.907598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 03:48:50.063945 (kubelet)[2189]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 16 03:48:51.402935 containerd[1575]: time="2026-04-16T03:48:51.402182486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:48:51.435649 kubelet[2189]: E0416 03:48:51.403153 2189 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 16 03:48:51.676433 containerd[1575]: time="2026-04-16T03:48:51.466906132Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 16 03:48:51.676433 containerd[1575]: time="2026-04-16T03:48:51.568758805Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:48:51.664913 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 16 03:48:51.675698 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 16 03:48:51.707208 systemd[1]: kubelet.service: Consumed 1.087s CPU time, 110.2M memory peak. 
Apr 16 03:48:51.712962 containerd[1575]: time="2026-04-16T03:48:51.707612016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:48:51.769753 containerd[1575]: time="2026-04-16T03:48:51.754512901Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 33.854119245s" Apr 16 03:48:51.769753 containerd[1575]: time="2026-04-16T03:48:51.758194711Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 16 03:48:51.808399 containerd[1575]: time="2026-04-16T03:48:51.806930988Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 16 03:49:01.987544 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Apr 16 03:49:02.048797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 16 03:49:04.852609 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 16 03:49:05.109025 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:49:06.246430 kubelet[2210]: E0416 03:49:06.245760 2210 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:49:06.319412 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:49:06.319742 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:49:06.329243 systemd[1]: kubelet.service: Consumed 1.358s CPU time, 109.2M memory peak.
Apr 16 03:49:07.555446 containerd[1575]: time="2026-04-16T03:49:07.542890435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:49:07.555446 containerd[1575]: time="2026-04-16T03:49:07.578631391Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591"
Apr 16 03:49:07.679897 containerd[1575]: time="2026-04-16T03:49:07.674236200Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:49:07.779517 containerd[1575]: time="2026-04-16T03:49:07.772782294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:49:07.779517 containerd[1575]: time="2026-04-16T03:49:07.780997392Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 15.973806779s"
Apr 16 03:49:07.782645 containerd[1575]: time="2026-04-16T03:49:07.781250534Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 16 03:49:07.783468 containerd[1575]: time="2026-04-16T03:49:07.782853504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 16 03:49:16.419064 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Apr 16 03:49:16.506366 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:49:20.293581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:49:20.366366 (kubelet)[2230]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:49:21.420120 containerd[1575]: time="2026-04-16T03:49:21.403797978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:49:21.448807 containerd[1575]: time="2026-04-16T03:49:21.441901314Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222"
Apr 16 03:49:21.503442 containerd[1575]: time="2026-04-16T03:49:21.503287232Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:49:21.547513 kubelet[2230]: E0416 03:49:21.518327 2230 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:49:21.605765 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:49:21.606275 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:49:21.633571 containerd[1575]: time="2026-04-16T03:49:21.633024895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:49:21.644525 systemd[1]: kubelet.service: Consumed 1.573s CPU time, 110.8M memory peak.
Apr 16 03:49:21.670027 containerd[1575]: time="2026-04-16T03:49:21.663598438Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 13.880699056s"
Apr 16 03:49:21.670027 containerd[1575]: time="2026-04-16T03:49:21.663678552Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 16 03:49:21.674982 containerd[1575]: time="2026-04-16T03:49:21.673990775Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 16 03:49:31.660030 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
Apr 16 03:49:31.742243 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:49:34.766606 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:49:35.003269 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:49:35.860302 kubelet[2251]: E0416 03:49:35.859705 2251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:49:35.892649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:49:35.893252 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:49:35.913455 systemd[1]: kubelet.service: Consumed 1.224s CPU time, 111.4M memory peak.
Apr 16 03:49:36.020471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1695519971.mount: Deactivated successfully.
Apr 16 03:49:41.573427 containerd[1575]: time="2026-04-16T03:49:41.571580963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:49:41.647156 containerd[1575]: time="2026-04-16T03:49:41.644569584Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819"
Apr 16 03:49:41.647156 containerd[1575]: time="2026-04-16T03:49:41.646537667Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:49:41.677932 containerd[1575]: time="2026-04-16T03:49:41.676599668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:49:41.710616 containerd[1575]: time="2026-04-16T03:49:41.679689356Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 20.005421461s"
Apr 16 03:49:41.710616 containerd[1575]: time="2026-04-16T03:49:41.679747474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 16 03:49:41.710616 containerd[1575]: time="2026-04-16T03:49:41.708285116Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 16 03:49:45.136380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3212275469.mount: Deactivated successfully.
Apr 16 03:49:45.902954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
Apr 16 03:49:45.948422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:49:48.174718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:49:48.392925 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:49:49.223486 kubelet[2283]: E0416 03:49:49.222639 2283 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:49:49.235877 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:49:49.237680 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:49:49.238663 systemd[1]: kubelet.service: Consumed 1.062s CPU time, 110.7M memory peak.
Apr 16 03:49:59.506437 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 11.
Apr 16 03:49:59.516931 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:50:03.360629 containerd[1575]: time="2026-04-16T03:50:03.352858532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:50:03.360629 containerd[1575]: time="2026-04-16T03:50:03.371680839Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980"
Apr 16 03:50:03.382926 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:50:03.391393 containerd[1575]: time="2026-04-16T03:50:03.388917299Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:50:03.426003 containerd[1575]: time="2026-04-16T03:50:03.420832959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:50:03.448904 containerd[1575]: time="2026-04-16T03:50:03.448405485Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 21.739907484s"
Apr 16 03:50:03.448904 containerd[1575]: time="2026-04-16T03:50:03.448553739Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 16 03:50:03.460859 containerd[1575]: time="2026-04-16T03:50:03.456573647Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 16 03:50:03.545019 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:50:04.710253 kubelet[2346]: E0416 03:50:04.708961 2346 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:50:04.759499 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:50:04.760363 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:50:04.761989 systemd[1]: kubelet.service: Consumed 1.400s CPU time, 110.4M memory peak.
Apr 16 03:50:06.178126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount627675877.mount: Deactivated successfully.
Apr 16 03:50:06.424251 containerd[1575]: time="2026-04-16T03:50:06.413875953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:50:06.424251 containerd[1575]: time="2026-04-16T03:50:06.433408645Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 16 03:50:06.507360 containerd[1575]: time="2026-04-16T03:50:06.489789879Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:50:06.603489 containerd[1575]: time="2026-04-16T03:50:06.592911312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:50:06.942737 containerd[1575]: time="2026-04-16T03:50:06.905214439Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 3.448477102s"
Apr 16 03:50:06.942737 containerd[1575]: time="2026-04-16T03:50:06.905474412Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 16 03:50:06.942737 containerd[1575]: time="2026-04-16T03:50:06.934870017Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 16 03:50:16.625954 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 12.
Apr 16 03:50:17.232829 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:50:21.754584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2996446122.mount: Deactivated successfully.
Apr 16 03:50:24.725991 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:50:25.546436 (kubelet)[2378]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:50:28.186772 kubelet[2378]: E0416 03:50:28.186302 2378 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:50:28.246134 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:50:28.246411 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:50:28.282285 systemd[1]: kubelet.service: Consumed 4.566s CPU time, 110.6M memory peak.
Apr 16 03:50:38.708422 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 13.
Apr 16 03:50:38.903351 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:50:43.558899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:50:43.621079 (kubelet)[2432]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:50:44.611317 kubelet[2432]: E0416 03:50:44.605828 2432 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:50:44.668775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:50:44.669047 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:50:44.679574 systemd[1]: kubelet.service: Consumed 2.554s CPU time, 110.6M memory peak.
Apr 16 03:50:49.398031 containerd[1575]: time="2026-04-16T03:50:49.396287368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:50:49.464602 containerd[1575]: time="2026-04-16T03:50:49.405224413Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979"
Apr 16 03:50:49.467924 containerd[1575]: time="2026-04-16T03:50:49.467807419Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:50:49.484000 containerd[1575]: time="2026-04-16T03:50:49.470640387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 03:50:49.484000 containerd[1575]: time="2026-04-16T03:50:49.478267312Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 42.543113059s"
Apr 16 03:50:49.484000 containerd[1575]: time="2026-04-16T03:50:49.478904209Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 16 03:50:54.853987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 14.
Apr 16 03:50:55.012436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:50:58.796551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:50:58.946480 (kubelet)[2487]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 16 03:50:59.524248 kubelet[2487]: E0416 03:50:59.516926 2487 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 16 03:50:59.543409 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 16 03:50:59.544500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 16 03:50:59.547727 systemd[1]: kubelet.service: Consumed 1.225s CPU time, 110.5M memory peak.
Apr 16 03:51:02.529176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:51:02.529555 systemd[1]: kubelet.service: Consumed 1.225s CPU time, 110.5M memory peak.
Apr 16 03:51:02.748073 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:51:03.342419 systemd[1]: Reload requested from client PID 2504 ('systemctl') (unit session-7.scope)...
Apr 16 03:51:03.342475 systemd[1]: Reloading...
Apr 16 03:51:04.648137 zram_generator::config[2547]: No configuration found.
Apr 16 03:51:17.195213 systemd[1]: Reloading finished in 13851 ms.
Apr 16 03:51:18.334789 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 16 03:51:18.334913 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 16 03:51:18.370405 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:51:18.371175 systemd[1]: kubelet.service: Consumed 553ms CPU time, 98.4M memory peak.
Apr 16 03:51:18.499462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:51:22.674379 update_engine[1555]: I20260416 03:51:22.669888 1555 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 16 03:51:22.688240 update_engine[1555]: I20260416 03:51:22.685599 1555 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 16 03:51:22.688240 update_engine[1555]: I20260416 03:51:22.687204 1555 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 16 03:51:22.688240 update_engine[1555]: I20260416 03:51:22.687965 1555 omaha_request_params.cc:62] Current group set to stable
Apr 16 03:51:22.688443 update_engine[1555]: I20260416 03:51:22.688268 1555 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 16 03:51:22.688443 update_engine[1555]: I20260416 03:51:22.688283 1555 update_attempter.cc:643] Scheduling an action processor start.
Apr 16 03:51:22.688443 update_engine[1555]: I20260416 03:51:22.688302 1555 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 16 03:51:22.688443 update_engine[1555]: I20260416 03:51:22.688331 1555 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 16 03:51:22.688443 update_engine[1555]: I20260416 03:51:22.688418 1555 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 16 03:51:22.688443 update_engine[1555]: I20260416 03:51:22.688426 1555 omaha_request_action.cc:272] Request:
Apr 16 03:51:22.688443 update_engine[1555]:
Apr 16 03:51:22.688443 update_engine[1555]:
Apr 16 03:51:22.688443 update_engine[1555]:
Apr 16 03:51:22.688443 update_engine[1555]:
Apr 16 03:51:22.688443 update_engine[1555]:
Apr 16 03:51:22.688443 update_engine[1555]:
Apr 16 03:51:22.688443 update_engine[1555]:
Apr 16 03:51:22.688443 update_engine[1555]:
Apr 16 03:51:22.688443 update_engine[1555]: I20260416 03:51:22.688434 1555 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 03:51:22.689690 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 16 03:51:22.690022 update_engine[1555]: I20260416 03:51:22.689780 1555 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 03:51:22.756291 update_engine[1555]: I20260416 03:51:22.751994 1555 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 03:51:22.773603 update_engine[1555]: E20260416 03:51:22.768998 1555 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 03:51:22.790942 update_engine[1555]: I20260416 03:51:22.779570 1555 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 16 03:51:25.871803 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:51:26.218042 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 03:51:29.066819 kubelet[2597]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 03:51:32.685810 update_engine[1555]: I20260416 03:51:32.653784 1555 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 16 03:51:32.685810 update_engine[1555]: I20260416 03:51:32.672034 1555 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 16 03:51:32.761717 update_engine[1555]: I20260416 03:51:32.696872 1555 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 16 03:51:32.761717 update_engine[1555]: E20260416 03:51:32.749560 1555 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 16 03:51:32.761717 update_engine[1555]: I20260416 03:51:32.750545 1555 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 16 03:51:32.782758 kubelet[2597]: I0416 03:51:32.782511 2597 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 16 03:51:32.782758 kubelet[2597]: I0416 03:51:32.782734 2597 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 03:51:32.806235 kubelet[2597]: I0416 03:51:32.783044 2597 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 03:51:32.806235 kubelet[2597]: I0416 03:51:32.783307 2597 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 03:51:32.806235 kubelet[2597]: I0416 03:51:32.788918 2597 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 16 03:51:33.235133 kubelet[2597]: E0416 03:51:33.234360 2597 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 03:51:33.235133 kubelet[2597]: I0416 03:51:33.234434 2597 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 16 03:51:33.970012 kubelet[2597]: I0416 03:51:33.894746 2597 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 16 03:51:34.722935 kubelet[2597]: I0416 03:51:34.721996 2597 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 16 03:51:34.737955 kubelet[2597]: I0416 03:51:34.727328 2597 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 16 03:51:34.804072 kubelet[2597]: I0416 03:51:34.750032 2597 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 16 03:51:34.804072 kubelet[2597]: I0416 03:51:34.804630 2597 topology_manager.go:143] "Creating topology manager with none policy"
Apr 16 03:51:34.804072 kubelet[2597]: I0416 03:51:34.804976 2597 container_manager_linux.go:308] "Creating device plugin manager"
Apr 16 03:51:35.015983 kubelet[2597]: I0416 03:51:34.825817 2597 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 16 03:51:35.019069 kubelet[2597]: I0416 03:51:35.018706 2597 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 16 03:51:35.019917 kubelet[2597]: I0416 03:51:35.019849 2597 kubelet.go:482] "Attempting to sync node with API server"
Apr 16 03:51:35.019917 kubelet[2597]: I0416 03:51:35.019902 2597 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 16 03:51:35.020049 kubelet[2597]: I0416 03:51:35.020040 2597 kubelet.go:394] "Adding apiserver pod source"
Apr 16 03:51:35.020076 kubelet[2597]: I0416 03:51:35.020060 2597 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 16 03:51:35.158027 kubelet[2597]: I0416 03:51:35.156570 2597 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 16 03:51:35.223597 kubelet[2597]: I0416 03:51:35.215363 2597 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 16 03:51:35.223597 kubelet[2597]: I0416 03:51:35.220834 2597 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 16 03:51:35.260796 kubelet[2597]: W0416 03:51:35.233903 2597 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 16 03:51:35.499576 kubelet[2597]: I0416 03:51:35.497314 2597 server.go:1257] "Started kubelet"
Apr 16 03:51:35.499576 kubelet[2597]: I0416 03:51:35.499418 2597 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 16 03:51:35.501367 kubelet[2597]: E0416 03:51:35.501175 2597 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 16 03:51:35.501587 kubelet[2597]: I0416 03:51:35.501438 2597 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 16 03:51:35.501623 kubelet[2597]: I0416 03:51:35.501602 2597 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 16 03:51:35.506264 kubelet[2597]: I0416 03:51:35.503389 2597 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 16 03:51:35.521619 kubelet[2597]: I0416 03:51:35.521244 2597 server.go:317] "Adding debug handlers to kubelet server"
Apr 16 03:51:35.538041 kubelet[2597]: I0416 03:51:35.538002 2597 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 16 03:51:35.545852 kubelet[2597]: I0416 03:51:35.545742 2597 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 16 03:51:35.548009 kubelet[2597]: I0416 03:51:35.547952 2597 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 16 03:51:35.594369 kubelet[2597]: E0416 03:51:35.574750 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:35.626825 kubelet[2597]: I0416 03:51:35.626057 2597 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 16 03:51:35.635505 kubelet[2597]: I0416 03:51:35.635372 2597 reconciler.go:29] "Reconciler: start to sync state"
Apr 16 03:51:35.636621 kubelet[2597]: E0416 03:51:35.636456 2597 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="200ms"
Apr 16 03:51:35.641923 kubelet[2597]: E0416 03:51:35.641032 2597 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 16 03:51:35.679494 kubelet[2597]: E0416 03:51:35.641637 2597 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b9e4d2d8d786 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:51:35.485347718 +0000 UTC m=+8.871525509,LastTimestamp:2026-04-16 03:51:35.485347718 +0000 UTC m=+8.871525509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 03:51:35.737337 kubelet[2597]: E0416 03:51:35.729074 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:35.765230 kubelet[2597]: I0416 03:51:35.761813 2597 factory.go:223] Registration of the containerd container factory successfully
Apr 16 03:51:35.765230 kubelet[2597]: I0416 03:51:35.762013 2597 factory.go:223] Registration of the systemd container factory successfully
Apr 16 03:51:35.765617 kubelet[2597]: I0416 03:51:35.765332 2597 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 16 03:51:35.834859 kubelet[2597]: E0416 03:51:35.830882 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:35.901034 kubelet[2597]: E0416 03:51:35.900410 2597 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="400ms"
Apr 16 03:51:35.975804 kubelet[2597]: E0416 03:51:35.938567 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:36.044794 kubelet[2597]: E0416 03:51:36.042403 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:36.157703 kubelet[2597]: E0416 03:51:36.155983 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:36.266241 kubelet[2597]: E0416 03:51:36.259437 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:36.384468 kubelet[2597]: E0416 03:51:36.369695 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:36.394976 kubelet[2597]: E0416 03:51:36.384504 2597 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="800ms"
Apr 16 03:51:36.537265 kubelet[2597]: E0416 03:51:36.529259 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:36.688263 kubelet[2597]: E0416 03:51:36.642745 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:36.746666 kubelet[2597]: E0416 03:51:36.736132 2597 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a6b9e4d2d8d786 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:51:35.485347718 +0000 UTC m=+8.871525509,LastTimestamp:2026-04-16 03:51:35.485347718 +0000 UTC m=+8.871525509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Apr 16 03:51:36.746666 kubelet[2597]: E0416 03:51:36.746069 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:51:36.778027 kubelet[2597]: I0416 03:51:36.777808 2597 cpu_manager.go:225] "Starting" policy="none"
Apr 16 03:51:36.778027 kubelet[2597]: I0416 03:51:36.777841 2597 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 16 03:51:36.778027 kubelet[2597]: I0416 03:51:36.777912 2597 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 16 03:51:36.816701 kubelet[2597]: I0416 03:51:36.794166 2597 policy_none.go:50] "Start"
Apr 16 03:51:36.816701 kubelet[2597]: I0416 03:51:36.815968 2597 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 16 03:51:36.816701
kubelet[2597]: I0416 03:51:36.816172 2597 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 03:51:36.828518 kubelet[2597]: I0416 03:51:36.828282 2597 policy_none.go:44] "Start" Apr 16 03:51:36.864126 kubelet[2597]: E0416 03:51:36.851989 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:36.972078 kubelet[2597]: E0416 03:51:36.971704 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:37.058020 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 16 03:51:37.233428 kubelet[2597]: E0416 03:51:37.214388 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:37.233428 kubelet[2597]: E0416 03:51:37.232230 2597 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="1.6s" Apr 16 03:51:37.234509 kubelet[2597]: I0416 03:51:37.234057 2597 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 03:51:37.245588 kubelet[2597]: I0416 03:51:37.245436 2597 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 03:51:37.246191 kubelet[2597]: I0416 03:51:37.245932 2597 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 16 03:51:37.246696 kubelet[2597]: I0416 03:51:37.246276 2597 kubelet.go:2501] "Starting kubelet main sync loop" Apr 16 03:51:37.246696 kubelet[2597]: E0416 03:51:37.246443 2597 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 03:51:37.365464 kubelet[2597]: E0416 03:51:37.353953 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:37.379059 kubelet[2597]: E0416 03:51:37.353893 2597 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 03:51:37.380016 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 16 03:51:37.479849 kubelet[2597]: E0416 03:51:37.479410 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:37.591708 kubelet[2597]: E0416 03:51:37.586017 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:37.591708 kubelet[2597]: E0416 03:51:37.586008 2597 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 03:51:37.761302 kubelet[2597]: E0416 03:51:37.688527 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:38.015794 kubelet[2597]: E0416 03:51:37.793980 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:38.015794 kubelet[2597]: E0416 03:51:37.986678 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node 
\"localhost\" not found" Apr 16 03:51:38.015794 kubelet[2597]: E0416 03:51:37.986760 2597 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 03:51:38.185004 kubelet[2597]: E0416 03:51:38.168644 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:38.283198 kubelet[2597]: E0416 03:51:38.273675 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:38.385628 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 16 03:51:38.482964 kubelet[2597]: E0416 03:51:38.434623 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:38.599054 kubelet[2597]: E0416 03:51:38.576414 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:38.699843 kubelet[2597]: E0416 03:51:38.691564 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:38.902327 kubelet[2597]: E0416 03:51:38.852525 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:38.902327 kubelet[2597]: E0416 03:51:38.861275 2597 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 03:51:38.939602 kubelet[2597]: E0416 03:51:38.911581 2597 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="3.2s" Apr 16 03:51:39.059593 kubelet[2597]: E0416 03:51:39.047989 2597 kubelet_node_status.go:392] "Error getting the current node from 
lister" err="node \"localhost\" not found" Apr 16 03:51:39.129471 kubelet[2597]: E0416 03:51:39.128477 2597 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 03:51:39.272069 kubelet[2597]: E0416 03:51:39.166873 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:39.272069 kubelet[2597]: I0416 03:51:39.253057 2597 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 16 03:51:39.285326 kubelet[2597]: I0416 03:51:39.272201 2597 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 03:51:39.285326 kubelet[2597]: E0416 03:51:39.277510 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:51:39.490443 kubelet[2597]: I0416 03:51:39.341653 2597 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 16 03:51:39.666892 kubelet[2597]: E0416 03:51:39.588041 2597 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 03:51:39.893624 kubelet[2597]: E0416 03:51:39.718031 2597 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:51:39.945377 kubelet[2597]: I0416 03:51:39.915799 2597 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 03:51:39.968357 kubelet[2597]: E0416 03:51:39.961023 2597 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 03:51:40.114894 kubelet[2597]: E0416 03:51:40.085805 2597 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Apr 16 03:51:40.629704 kubelet[2597]: I0416 03:51:40.623180 2597 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 03:51:40.814039 kubelet[2597]: I0416 03:51:40.628518 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e36b9f0b70aad9002c2c8f9f1da92c42-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e36b9f0b70aad9002c2c8f9f1da92c42\") " pod="kube-system/kube-apiserver-localhost" Apr 16 03:51:40.814039 kubelet[2597]: I0416 03:51:40.639629 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e36b9f0b70aad9002c2c8f9f1da92c42-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e36b9f0b70aad9002c2c8f9f1da92c42\") " pod="kube-system/kube-apiserver-localhost" Apr 
16 03:51:40.814039 kubelet[2597]: I0416 03:51:40.674919 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e36b9f0b70aad9002c2c8f9f1da92c42-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e36b9f0b70aad9002c2c8f9f1da92c42\") " pod="kube-system/kube-apiserver-localhost" Apr 16 03:51:40.814039 kubelet[2597]: E0416 03:51:40.737598 2597 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Apr 16 03:51:41.241966 kubelet[2597]: I0416 03:51:41.239550 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:51:41.241966 kubelet[2597]: I0416 03:51:41.239747 2597 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 03:51:41.241966 kubelet[2597]: I0416 03:51:41.240000 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:51:41.241966 kubelet[2597]: I0416 03:51:41.240033 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 
03:51:41.241966 kubelet[2597]: I0416 03:51:41.240053 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:51:41.241966 kubelet[2597]: I0416 03:51:41.241553 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:51:41.353195 kubelet[2597]: E0416 03:51:41.242678 2597 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Apr 16 03:51:41.697119 kubelet[2597]: I0416 03:51:41.661446 2597 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 16 03:51:42.137767 kubelet[2597]: E0416 03:51:42.133393 2597 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="6.4s" Apr 16 03:51:42.140680 kubelet[2597]: I0416 03:51:42.140647 2597 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 03:51:42.151785 kubelet[2597]: E0416 03:51:42.148602 2597 
kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Apr 16 03:51:42.569895 systemd[1]: Created slice kubepods-burstable-pode36b9f0b70aad9002c2c8f9f1da92c42.slice - libcontainer container kubepods-burstable-pode36b9f0b70aad9002c2c8f9f1da92c42.slice. Apr 16 03:51:42.636137 update_engine[1555]: I20260416 03:51:42.634496 1555 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 03:51:42.671953 update_engine[1555]: I20260416 03:51:42.656983 1555 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 03:51:42.671953 update_engine[1555]: I20260416 03:51:42.658747 1555 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 16 03:51:42.680245 update_engine[1555]: E20260416 03:51:42.679179 1555 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 03:51:42.680245 update_engine[1555]: I20260416 03:51:42.679876 1555 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 16 03:51:42.866304 kubelet[2597]: E0416 03:51:42.865212 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:42.869519 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. 
Apr 16 03:51:42.955817 containerd[1575]: time="2026-04-16T03:51:42.950678591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e36b9f0b70aad9002c2c8f9f1da92c42,Namespace:kube-system,Attempt:0,}" Apr 16 03:51:42.975424 kubelet[2597]: E0416 03:51:42.952361 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:42.978939 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. Apr 16 03:51:42.983955 containerd[1575]: time="2026-04-16T03:51:42.982976943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 16 03:51:43.166210 kubelet[2597]: E0416 03:51:43.159251 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:43.254829 containerd[1575]: time="2026-04-16T03:51:43.254479057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 16 03:51:43.943887 kubelet[2597]: I0416 03:51:43.908008 2597 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 03:51:43.991852 kubelet[2597]: E0416 03:51:43.971012 2597 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Apr 16 03:51:46.787896 kubelet[2597]: E0416 03:51:46.774952 2597 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.115:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.115:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.18a6b9e4d2d8d786 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:51:35.485347718 +0000 UTC m=+8.871525509,LastTimestamp:2026-04-16 03:51:35.485347718 +0000 UTC m=+8.871525509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 03:51:47.192619 kubelet[2597]: I0416 03:51:47.190260 2597 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 03:51:47.233376 kubelet[2597]: E0416 03:51:47.233234 2597 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost" Apr 16 03:51:47.413517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3163721440.mount: Deactivated successfully. 
Apr 16 03:51:47.592907 containerd[1575]: time="2026-04-16T03:51:47.589957895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 03:51:47.632489 containerd[1575]: time="2026-04-16T03:51:47.630386004Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 16 03:51:47.764284 containerd[1575]: time="2026-04-16T03:51:47.763323577Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 03:51:47.776398 containerd[1575]: time="2026-04-16T03:51:47.775628510Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 03:51:47.839424 containerd[1575]: time="2026-04-16T03:51:47.832452673Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 03:51:47.885020 containerd[1575]: time="2026-04-16T03:51:47.866654594Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 03:51:48.006648 containerd[1575]: time="2026-04-16T03:51:47.989951478Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 16 03:51:48.026970 containerd[1575]: time="2026-04-16T03:51:48.026323236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 16 
03:51:48.153927 containerd[1575]: time="2026-04-16T03:51:48.090904912Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 5.113714189s" Apr 16 03:51:48.222782 containerd[1575]: time="2026-04-16T03:51:48.205614907Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 5.190718976s" Apr 16 03:51:48.391065 containerd[1575]: time="2026-04-16T03:51:48.387113309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 5.059787352s" Apr 16 03:51:48.794038 kubelet[2597]: E0416 03:51:48.782431 2597 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.115:6443: connect: connection refused" interval="7s" Apr 16 03:51:48.912932 kubelet[2597]: E0416 03:51:48.897625 2597 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.115:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.115:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 16 
03:51:49.747658 containerd[1575]: time="2026-04-16T03:51:49.738853511Z" level=info msg="connecting to shim 6ea89118d7b9def11008b24c28723cda88641d52fcd63e992e9e394404b578a7" address="unix:///run/containerd/s/0d76f539d525c9b611b70b0212a2b63ffb161cf804e58d967d0663f0c4b44310" namespace=k8s.io protocol=ttrpc version=3 Apr 16 03:51:49.965945 kubelet[2597]: E0416 03:51:49.816708 2597 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:51:50.306767 containerd[1575]: time="2026-04-16T03:51:50.304634099Z" level=info msg="connecting to shim b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f" address="unix:///run/containerd/s/64fc1d346c666396b4a6f4eda52f8f58d8abeacdc8da519fac54d1b45f3029a3" namespace=k8s.io protocol=ttrpc version=3 Apr 16 03:51:51.272648 containerd[1575]: time="2026-04-16T03:51:51.270769571Z" level=info msg="connecting to shim 02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81" address="unix:///run/containerd/s/b0f2c5cfffdebf676e7ed85c3328df6a87775c2b04620a5f0b47a494ee449f34" namespace=k8s.io protocol=ttrpc version=3 Apr 16 03:51:52.671388 update_engine[1555]: I20260416 03:51:52.636255 1555 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:52.689653 1555 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:52.831462 1555 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 03:51:53.054403 update_engine[1555]: E20260416 03:51:52.854496 1555 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:52.880079 1555 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:52.938910 1555 omaha_request_action.cc:617] Omaha request response: Apr 16 03:51:53.054403 update_engine[1555]: E20260416 03:51:52.979914 1555 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:53.020736 1555 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:53.039928 1555 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:53.041564 1555 update_attempter.cc:306] Processing Done. Apr 16 03:51:53.054403 update_engine[1555]: E20260416 03:51:53.041664 1555 update_attempter.cc:619] Update failed. Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:53.041672 1555 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:53.041678 1555 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:53.041685 1555 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:53.041796 1555 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:53.041836 1555 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 16 03:51:53.054403 update_engine[1555]: I20260416 03:51:53.041846 1555 omaha_request_action.cc:272] Request: Apr 16 03:51:53.054403 update_engine[1555]: Apr 16 03:51:53.054403 update_engine[1555]: Apr 16 03:51:53.070659 update_engine[1555]: Apr 16 03:51:53.070659 update_engine[1555]: Apr 16 03:51:53.070659 update_engine[1555]: Apr 16 03:51:53.070659 update_engine[1555]: Apr 16 03:51:53.070659 update_engine[1555]: I20260416 03:51:53.041853 1555 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 16 03:51:53.070659 update_engine[1555]: I20260416 03:51:53.048965 1555 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 16 03:51:53.070659 update_engine[1555]: I20260416 03:51:53.070436 1555 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 16 03:51:53.088045 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 16 03:51:53.158689 update_engine[1555]: E20260416 03:51:53.094951 1555 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 16 03:51:53.158689 update_engine[1555]: I20260416 03:51:53.156713 1555 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 16 03:51:53.158689 update_engine[1555]: I20260416 03:51:53.157627 1555 omaha_request_action.cc:617] Omaha request response: Apr 16 03:51:53.158689 update_engine[1555]: I20260416 03:51:53.157848 1555 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 03:51:53.158689 update_engine[1555]: I20260416 03:51:53.157860 1555 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 16 03:51:53.158689 update_engine[1555]: I20260416 03:51:53.157866 1555 update_attempter.cc:306] Processing Done. Apr 16 03:51:53.158689 update_engine[1555]: I20260416 03:51:53.157876 1555 update_attempter.cc:310] Error event sent. Apr 16 03:51:53.158689 update_engine[1555]: I20260416 03:51:53.157931 1555 update_check_scheduler.cc:74] Next update check in 43m5s Apr 16 03:51:53.209406 locksmithd[1641]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 16 03:51:53.715959 systemd[1]: Started cri-containerd-02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81.scope - libcontainer container 02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81. 
Apr 16 03:51:53.731937 kubelet[2597]: I0416 03:51:53.726615 2597 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 16 03:51:53.734613 kubelet[2597]: E0416 03:51:53.734540 2597 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.115:6443/api/v1/nodes\": dial tcp 10.0.0.115:6443: connect: connection refused" node="localhost"
Apr 16 03:51:53.885267 systemd[1]: Started cri-containerd-6ea89118d7b9def11008b24c28723cda88641d52fcd63e992e9e394404b578a7.scope - libcontainer container 6ea89118d7b9def11008b24c28723cda88641d52fcd63e992e9e394404b578a7.
Apr 16 03:51:54.073604 systemd[1]: Started cri-containerd-b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f.scope - libcontainer container b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f.
Apr 16 03:51:54.757562 containerd[1575]: time="2026-04-16T03:51:54.753858458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\""
Apr 16 03:51:54.858999 containerd[1575]: time="2026-04-16T03:51:54.852942443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e36b9f0b70aad9002c2c8f9f1da92c42,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ea89118d7b9def11008b24c28723cda88641d52fcd63e992e9e394404b578a7\""
Apr 16 03:51:54.858999 containerd[1575]: time="2026-04-16T03:51:54.858709017Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 16 03:51:54.873885 containerd[1575]: time="2026-04-16T03:51:54.873443188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\""
Apr 16 03:51:54.880933 containerd[1575]: time="2026-04-16T03:51:54.880571682Z" level=info msg="CreateContainer within sandbox \"6ea89118d7b9def11008b24c28723cda88641d52fcd63e992e9e394404b578a7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 16 03:51:54.999252 containerd[1575]: time="2026-04-16T03:51:54.998376201Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 16 03:51:55.064028 containerd[1575]: time="2026-04-16T03:51:55.062678322Z" level=info msg="Container fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a: CDI devices from CRI Config.CDIDevices: []"
Apr 16 03:51:55.063257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2687053124.mount: Deactivated successfully.
Apr 16 03:51:55.070296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473398424.mount: Deactivated successfully.
Apr 16 03:51:55.070426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356141054.mount: Deactivated successfully.
Apr 16 03:51:55.078586 containerd[1575]: time="2026-04-16T03:51:55.074521446Z" level=info msg="Container d820b403e7cd7162e9bcf2f3b3499edded1d3f3f4df3ec4a740cd260ca5f3851: CDI devices from CRI Config.CDIDevices: []"
Apr 16 03:51:55.093257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1861604187.mount: Deactivated successfully.
Apr 16 03:51:55.147600 containerd[1575]: time="2026-04-16T03:51:55.146666011Z" level=info msg="Container bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad: CDI devices from CRI Config.CDIDevices: []"
Apr 16 03:51:55.183542 containerd[1575]: time="2026-04-16T03:51:55.182801359Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\""
Apr 16 03:51:55.183542 containerd[1575]: time="2026-04-16T03:51:55.183155944Z" level=info msg="CreateContainer within sandbox \"6ea89118d7b9def11008b24c28723cda88641d52fcd63e992e9e394404b578a7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d820b403e7cd7162e9bcf2f3b3499edded1d3f3f4df3ec4a740cd260ca5f3851\""
Apr 16 03:51:55.188185 containerd[1575]: time="2026-04-16T03:51:55.187296233Z" level=info msg="StartContainer for \"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\""
Apr 16 03:51:55.188185 containerd[1575]: time="2026-04-16T03:51:55.187364046Z" level=info msg="StartContainer for \"d820b403e7cd7162e9bcf2f3b3499edded1d3f3f4df3ec4a740cd260ca5f3851\""
Apr 16 03:51:55.188185 containerd[1575]: time="2026-04-16T03:51:55.190894737Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\""
Apr 16 03:51:55.192553 containerd[1575]: time="2026-04-16T03:51:55.192472283Z" level=info msg="connecting to shim fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a" address="unix:///run/containerd/s/b0f2c5cfffdebf676e7ed85c3328df6a87775c2b04620a5f0b47a494ee449f34" protocol=ttrpc version=3
Apr 16 03:51:55.193549 containerd[1575]: time="2026-04-16T03:51:55.193483472Z" level=info msg="connecting to shim d820b403e7cd7162e9bcf2f3b3499edded1d3f3f4df3ec4a740cd260ca5f3851" address="unix:///run/containerd/s/0d76f539d525c9b611b70b0212a2b63ffb161cf804e58d967d0663f0c4b44310" protocol=ttrpc version=3
Apr 16 03:51:55.194440 containerd[1575]: time="2026-04-16T03:51:55.194387189Z" level=info msg="StartContainer for \"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\""
Apr 16 03:51:55.196691 containerd[1575]: time="2026-04-16T03:51:55.196592596Z" level=info msg="connecting to shim bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad" address="unix:///run/containerd/s/64fc1d346c666396b4a6f4eda52f8f58d8abeacdc8da519fac54d1b45f3029a3" protocol=ttrpc version=3
Apr 16 03:51:55.294497 systemd[1]: Started cri-containerd-bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad.scope - libcontainer container bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad.
Apr 16 03:51:55.296130 systemd[1]: Started cri-containerd-d820b403e7cd7162e9bcf2f3b3499edded1d3f3f4df3ec4a740cd260ca5f3851.scope - libcontainer container d820b403e7cd7162e9bcf2f3b3499edded1d3f3f4df3ec4a740cd260ca5f3851.
Apr 16 03:51:55.303347 systemd[1]: Started cri-containerd-fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a.scope - libcontainer container fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a.
Apr 16 03:51:55.605889 containerd[1575]: time="2026-04-16T03:51:55.602631869Z" level=info msg="StartContainer for \"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" returns successfully" Apr 16 03:51:55.625806 containerd[1575]: time="2026-04-16T03:51:55.624673766Z" level=info msg="StartContainer for \"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\" returns successfully" Apr 16 03:51:55.625806 containerd[1575]: time="2026-04-16T03:51:55.629140143Z" level=info msg="StartContainer for \"d820b403e7cd7162e9bcf2f3b3499edded1d3f3f4df3ec4a740cd260ca5f3851\" returns successfully" Apr 16 03:51:55.815342 kubelet[2597]: E0416 03:51:55.808845 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:55.855614 kubelet[2597]: E0416 03:51:55.854969 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:55.860664 kubelet[2597]: E0416 03:51:55.860422 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:56.870198 kubelet[2597]: E0416 03:51:56.868031 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:56.875672 kubelet[2597]: E0416 03:51:56.873291 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:57.490127 kubelet[2597]: E0416 03:51:57.489461 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:58.922387 kubelet[2597]: E0416 03:51:58.920492 2597 kubelet.go:3336] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:58.997578 kubelet[2597]: E0416 03:51:58.994474 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:51:59.587161 kubelet[2597]: E0416 03:51:59.586293 2597 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 16 03:51:59.771910 kubelet[2597]: E0416 03:51:59.766363 2597 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18a6b9e4d2d8d786 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-16 03:51:35.485347718 +0000 UTC m=+8.871525509,LastTimestamp:2026-04-16 03:51:35.485347718 +0000 UTC m=+8.871525509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 16 03:51:59.986493 kubelet[2597]: E0416 03:51:59.985777 2597 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 16 03:52:00.062246 kubelet[2597]: E0416 03:52:00.054874 2597 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 16 03:52:00.517674 kubelet[2597]: E0416 03:52:00.515616 2597 csi_plugin.go:399] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Apr 16 03:52:00.752687 
kubelet[2597]: I0416 03:52:00.750226 2597 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 03:52:00.813287 kubelet[2597]: I0416 03:52:00.798860 2597 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 16 03:52:00.813287 kubelet[2597]: E0416 03:52:00.799027 2597 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 16 03:52:00.888306 kubelet[2597]: E0416 03:52:00.887311 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:00.991348 kubelet[2597]: E0416 03:52:00.990219 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:01.103821 kubelet[2597]: E0416 03:52:01.098300 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:01.224972 kubelet[2597]: E0416 03:52:01.208809 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:01.320775 kubelet[2597]: E0416 03:52:01.319620 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:01.421721 kubelet[2597]: E0416 03:52:01.420848 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:01.524381 kubelet[2597]: E0416 03:52:01.522211 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:01.632000 kubelet[2597]: E0416 03:52:01.624900 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:01.750509 kubelet[2597]: E0416 03:52:01.741483 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node 
\"localhost\" not found" Apr 16 03:52:01.871303 kubelet[2597]: E0416 03:52:01.865427 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:01.969441 kubelet[2597]: E0416 03:52:01.968353 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:02.078988 kubelet[2597]: E0416 03:52:02.073282 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:02.183503 kubelet[2597]: E0416 03:52:02.182285 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:02.287875 kubelet[2597]: E0416 03:52:02.286651 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:02.398673 kubelet[2597]: E0416 03:52:02.392701 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:02.510562 kubelet[2597]: E0416 03:52:02.508588 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:02.616806 kubelet[2597]: E0416 03:52:02.614984 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:02.729730 kubelet[2597]: E0416 03:52:02.718444 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:02.839591 kubelet[2597]: E0416 03:52:02.832625 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:02.938779 kubelet[2597]: E0416 03:52:02.937475 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:03.039898 kubelet[2597]: E0416 03:52:03.038123 2597 
kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:03.139870 kubelet[2597]: E0416 03:52:03.138795 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:03.243363 kubelet[2597]: E0416 03:52:03.242327 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:03.346253 kubelet[2597]: E0416 03:52:03.343683 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:03.449427 kubelet[2597]: E0416 03:52:03.448323 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:03.586651 kubelet[2597]: E0416 03:52:03.585519 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:03.718587 kubelet[2597]: E0416 03:52:03.712874 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:03.828037 kubelet[2597]: E0416 03:52:03.820419 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:03.930870 kubelet[2597]: E0416 03:52:03.925736 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:04.041842 kubelet[2597]: E0416 03:52:04.031919 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:04.157230 kubelet[2597]: E0416 03:52:04.153195 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:04.274807 kubelet[2597]: E0416 03:52:04.273133 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" 
not found" Apr 16 03:52:04.384167 kubelet[2597]: E0416 03:52:04.380499 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:04.491491 kubelet[2597]: E0416 03:52:04.486827 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:04.597554 kubelet[2597]: E0416 03:52:04.596713 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:04.707749 kubelet[2597]: E0416 03:52:04.701713 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:04.810294 kubelet[2597]: E0416 03:52:04.808851 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:04.966025 kubelet[2597]: E0416 03:52:04.964327 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:05.069609 kubelet[2597]: E0416 03:52:05.068258 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:05.172298 kubelet[2597]: E0416 03:52:05.171239 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:05.304281 kubelet[2597]: E0416 03:52:05.298987 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:05.404127 kubelet[2597]: E0416 03:52:05.402985 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:05.511463 kubelet[2597]: E0416 03:52:05.506734 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:05.619676 kubelet[2597]: E0416 03:52:05.615168 2597 
kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:05.720619 kubelet[2597]: E0416 03:52:05.719556 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:05.828217 kubelet[2597]: E0416 03:52:05.825741 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:05.962209 kubelet[2597]: E0416 03:52:05.950910 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:06.069370 kubelet[2597]: E0416 03:52:06.066917 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:06.174279 kubelet[2597]: E0416 03:52:06.172496 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:06.282547 kubelet[2597]: E0416 03:52:06.278738 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:06.401845 kubelet[2597]: E0416 03:52:06.399794 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:06.514588 kubelet[2597]: E0416 03:52:06.513314 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:06.624060 kubelet[2597]: E0416 03:52:06.614582 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:06.723991 kubelet[2597]: E0416 03:52:06.722932 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:06.854478 kubelet[2597]: E0416 03:52:06.850396 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" 
not found" Apr 16 03:52:06.952987 kubelet[2597]: E0416 03:52:06.952540 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:07.056229 kubelet[2597]: E0416 03:52:07.055172 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:07.170555 kubelet[2597]: E0416 03:52:07.163384 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:07.276221 kubelet[2597]: E0416 03:52:07.273066 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:07.379528 kubelet[2597]: E0416 03:52:07.374428 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:07.476501 kubelet[2597]: E0416 03:52:07.476280 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:07.511223 kubelet[2597]: E0416 03:52:07.511069 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:52:07.591316 kubelet[2597]: E0416 03:52:07.589835 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:07.596466 systemd[1]: Reload requested from client PID 2893 ('systemctl') (unit session-7.scope)... Apr 16 03:52:07.596550 systemd[1]: Reloading... 
Apr 16 03:52:07.691262 kubelet[2597]: E0416 03:52:07.690864 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:07.792999 kubelet[2597]: E0416 03:52:07.792729 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:07.903201 kubelet[2597]: E0416 03:52:07.899958 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:08.011908 kubelet[2597]: E0416 03:52:08.011405 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:08.115771 kubelet[2597]: E0416 03:52:08.115526 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:08.141280 zram_generator::config[2945]: No configuration found. Apr 16 03:52:08.224179 kubelet[2597]: E0416 03:52:08.222137 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:08.329522 kubelet[2597]: E0416 03:52:08.328640 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:08.431936 kubelet[2597]: E0416 03:52:08.430919 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:08.536009 kubelet[2597]: E0416 03:52:08.535577 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:08.644711 kubelet[2597]: E0416 03:52:08.644233 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:08.745456 kubelet[2597]: E0416 03:52:08.744928 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 
03:52:08.847833 kubelet[2597]: E0416 03:52:08.846765 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:08.947965 kubelet[2597]: E0416 03:52:08.947496 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:08.953599 kubelet[2597]: E0416 03:52:08.953524 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:52:09.046913 kubelet[2597]: E0416 03:52:09.043738 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:52:09.056984 kubelet[2597]: E0416 03:52:09.055974 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:09.204680 kubelet[2597]: E0416 03:52:09.204394 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:09.220315 kubelet[2597]: E0416 03:52:09.219947 2597 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 16 03:52:09.316080 kubelet[2597]: E0416 03:52:09.310372 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:09.422543 kubelet[2597]: E0416 03:52:09.422258 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 16 03:52:09.477938 systemd[1]: Reloading finished in 1880 ms. 
Apr 16 03:52:09.529153 kubelet[2597]: E0416 03:52:09.527977 2597 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 16 03:52:09.577003 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:52:09.707838 systemd[1]: kubelet.service: Deactivated successfully.
Apr 16 03:52:09.709949 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:52:09.710340 systemd[1]: kubelet.service: Consumed 17.084s CPU time, 127.5M memory peak.
Apr 16 03:52:09.744204 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 16 03:52:10.667791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 16 03:52:10.845952 (kubelet)[2980]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 16 03:52:11.505948 kubelet[2980]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 16 03:52:11.667408 kubelet[2980]: I0416 03:52:11.655530 2980 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 16 03:52:11.667408 kubelet[2980]: I0416 03:52:11.655758 2980 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 16 03:52:11.667408 kubelet[2980]: I0416 03:52:11.655771 2980 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 16 03:52:11.667408 kubelet[2980]: I0416 03:52:11.655886 2980 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 16 03:52:11.676193 kubelet[2980]: I0416 03:52:11.673193 2980 server.go:951] "Client rotation is on, will bootstrap in background" Apr 16 03:52:11.707563 kubelet[2980]: I0416 03:52:11.704685 2980 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 16 03:52:11.716463 kubelet[2980]: I0416 03:52:11.715762 2980 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 16 03:52:11.820423 kubelet[2980]: I0416 03:52:11.810933 2980 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 16 03:52:11.876380 kubelet[2980]: I0416 03:52:11.875550 2980 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 16 03:52:11.876380 kubelet[2980]: I0416 03:52:11.876079 2980 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 16 03:52:11.876380 kubelet[2980]: I0416 03:52:11.876182 2980 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 16 03:52:11.876380 kubelet[2980]: I0416 03:52:11.876613 2980 topology_manager.go:143] "Creating topology manager with none policy" Apr 16 03:52:11.877632 kubelet[2980]: I0416 03:52:11.876624 2980 container_manager_linux.go:308] "Creating device plugin manager" Apr 16 03:52:11.877632 kubelet[2980]: I0416 03:52:11.876658 2980 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 16 03:52:11.877632 kubelet[2980]: I0416 03:52:11.877064 2980 state_mem.go:41] 
"Initialized" logger="CPUManager state memory" Apr 16 03:52:11.877632 kubelet[2980]: I0416 03:52:11.877457 2980 kubelet.go:482] "Attempting to sync node with API server" Apr 16 03:52:11.877632 kubelet[2980]: I0416 03:52:11.877475 2980 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 16 03:52:11.877632 kubelet[2980]: I0416 03:52:11.877498 2980 kubelet.go:394] "Adding apiserver pod source" Apr 16 03:52:11.877632 kubelet[2980]: I0416 03:52:11.877509 2980 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 16 03:52:11.958656 kubelet[2980]: I0416 03:52:11.956570 2980 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 16 03:52:11.965294 kubelet[2980]: I0416 03:52:11.964051 2980 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 16 03:52:11.965294 kubelet[2980]: I0416 03:52:11.964160 2980 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 16 03:52:12.088417 kubelet[2980]: I0416 03:52:12.074353 2980 server.go:1257] "Started kubelet" Apr 16 03:52:12.102671 kubelet[2980]: I0416 03:52:12.099484 2980 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 16 03:52:12.103608 kubelet[2980]: I0416 03:52:12.103584 2980 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 16 03:52:12.104803 kubelet[2980]: I0416 03:52:12.104778 2980 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 16 03:52:12.122018 kubelet[2980]: I0416 03:52:12.121983 2980 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 16 03:52:12.125598 kubelet[2980]: I0416 03:52:12.124468 2980 server.go:182] "Starting to listen" 
address="0.0.0.0" port=10250 Apr 16 03:52:12.137694 kubelet[2980]: I0416 03:52:12.137663 2980 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 16 03:52:12.150792 kubelet[2980]: I0416 03:52:12.150692 2980 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 16 03:52:12.151303 kubelet[2980]: I0416 03:52:12.151122 2980 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 16 03:52:12.152932 kubelet[2980]: I0416 03:52:12.151542 2980 reconciler.go:29] "Reconciler: start to sync state" Apr 16 03:52:12.171548 kubelet[2980]: I0416 03:52:12.132989 2980 server.go:317] "Adding debug handlers to kubelet server" Apr 16 03:52:12.393716 kubelet[2980]: I0416 03:52:12.369038 2980 factory.go:223] Registration of the systemd container factory successfully Apr 16 03:52:12.393716 kubelet[2980]: I0416 03:52:12.369203 2980 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 16 03:52:12.422943 kubelet[2980]: I0416 03:52:12.422800 2980 factory.go:223] Registration of the containerd container factory successfully Apr 16 03:52:12.470865 kubelet[2980]: E0416 03:52:12.465908 2980 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 16 03:52:12.555774 kubelet[2980]: I0416 03:52:12.542421 2980 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 16 03:52:12.565144 kubelet[2980]: I0416 03:52:12.565117 2980 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 16 03:52:12.565364 kubelet[2980]: I0416 03:52:12.565300 2980 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 16 03:52:12.565620 kubelet[2980]: I0416 03:52:12.565336 2980 kubelet.go:2501] "Starting kubelet main sync loop" Apr 16 03:52:12.699690 kubelet[2980]: E0416 03:52:12.675828 2980 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 16 03:52:12.790908 kubelet[2980]: E0416 03:52:12.784780 2980 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 03:52:12.905433 kubelet[2980]: I0416 03:52:12.901581 2980 apiserver.go:52] "Watching apiserver" Apr 16 03:52:12.994798 kubelet[2980]: E0416 03:52:12.991998 2980 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 16 03:52:13.160904 kubelet[2980]: I0416 03:52:13.157800 2980 cpu_manager.go:225] "Starting" policy="none" Apr 16 03:52:13.160904 kubelet[2980]: I0416 03:52:13.159550 2980 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 16 03:52:13.160904 kubelet[2980]: I0416 03:52:13.159955 2980 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 16 03:52:13.160904 kubelet[2980]: I0416 03:52:13.160358 2980 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 16 03:52:13.160904 kubelet[2980]: I0416 03:52:13.160374 2980 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 16 03:52:13.160904 kubelet[2980]: I0416 03:52:13.160400 2980 policy_none.go:50] "Start" Apr 16 03:52:13.160904 kubelet[2980]: I0416 03:52:13.160413 2980 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 16 03:52:13.160904 kubelet[2980]: I0416 03:52:13.160425 2980 
state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 16 03:52:13.160904 kubelet[2980]: I0416 03:52:13.160556 2980 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 16 03:52:13.160904 kubelet[2980]: I0416 03:52:13.160566 2980 policy_none.go:44] "Start" Apr 16 03:52:13.326739 kubelet[2980]: E0416 03:52:13.324668 2980 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 16 03:52:13.339016 kubelet[2980]: I0416 03:52:13.338936 2980 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 16 03:52:13.344675 kubelet[2980]: I0416 03:52:13.344571 2980 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 16 03:52:13.375280 kubelet[2980]: I0416 03:52:13.374821 2980 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 16 03:52:13.471335 kubelet[2980]: E0416 03:52:13.470198 2980 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 16 03:52:13.491660 kubelet[2980]: I0416 03:52:13.487337 2980 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 16 03:52:13.491660 kubelet[2980]: I0416 03:52:13.488260 2980 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 16 03:52:13.494235 kubelet[2980]: I0416 03:52:13.493224 2980 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 16 03:52:13.559707 kubelet[2980]: I0416 03:52:13.557650 2980 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 16 03:52:13.572257 kubelet[2980]: I0416 03:52:13.568477 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:52:13.572257 kubelet[2980]: I0416 03:52:13.568515 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:52:13.572257 kubelet[2980]: I0416 03:52:13.568540 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:52:13.572257 kubelet[2980]: I0416 03:52:13.568565 
2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 16 03:52:13.572257 kubelet[2980]: I0416 03:52:13.568584 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e36b9f0b70aad9002c2c8f9f1da92c42-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e36b9f0b70aad9002c2c8f9f1da92c42\") " pod="kube-system/kube-apiserver-localhost" Apr 16 03:52:13.572536 kubelet[2980]: I0416 03:52:13.568612 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e36b9f0b70aad9002c2c8f9f1da92c42-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e36b9f0b70aad9002c2c8f9f1da92c42\") " pod="kube-system/kube-apiserver-localhost" Apr 16 03:52:13.572536 kubelet[2980]: I0416 03:52:13.568630 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e36b9f0b70aad9002c2c8f9f1da92c42-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e36b9f0b70aad9002c2c8f9f1da92c42\") " pod="kube-system/kube-apiserver-localhost" Apr 16 03:52:13.572536 kubelet[2980]: I0416 03:52:13.568648 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:52:13.572536 kubelet[2980]: I0416 03:52:13.568669 2980 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 16 03:52:13.575473 kubelet[2980]: I0416 03:52:13.575205 2980 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 16 03:52:13.901404 kubelet[2980]: I0416 03:52:13.894421 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/102604ad-f0bc-4e5a-b79e-38de53bf7e5b-xtables-lock\") pod \"kube-proxy-jmwjb\" (UID: \"102604ad-f0bc-4e5a-b79e-38de53bf7e5b\") " pod="kube-system/kube-proxy-jmwjb" Apr 16 03:52:13.901404 kubelet[2980]: I0416 03:52:13.894715 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/102604ad-f0bc-4e5a-b79e-38de53bf7e5b-lib-modules\") pod \"kube-proxy-jmwjb\" (UID: \"102604ad-f0bc-4e5a-b79e-38de53bf7e5b\") " pod="kube-system/kube-proxy-jmwjb" Apr 16 03:52:13.901404 kubelet[2980]: I0416 03:52:13.894748 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/102604ad-f0bc-4e5a-b79e-38de53bf7e5b-kube-proxy\") pod \"kube-proxy-jmwjb\" (UID: \"102604ad-f0bc-4e5a-b79e-38de53bf7e5b\") " pod="kube-system/kube-proxy-jmwjb" Apr 16 03:52:13.901404 kubelet[2980]: I0416 03:52:13.894770 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6xs8\" (UniqueName: \"kubernetes.io/projected/102604ad-f0bc-4e5a-b79e-38de53bf7e5b-kube-api-access-v6xs8\") pod \"kube-proxy-jmwjb\" (UID: \"102604ad-f0bc-4e5a-b79e-38de53bf7e5b\") " pod="kube-system/kube-proxy-jmwjb" Apr 16 
03:52:13.928765 kubelet[2980]: I0416 03:52:13.927741 2980 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 16 03:52:13.928765 kubelet[2980]: I0416 03:52:13.928000 2980 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 16 03:52:13.928765 kubelet[2980]: I0416 03:52:13.928040 2980 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 16 03:52:13.958185 containerd[1575]: time="2026-04-16T03:52:13.956003417Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 16 03:52:14.004045 kubelet[2980]: I0416 03:52:13.958009 2980 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 16 03:52:14.015443 kubelet[2980]: E0416 03:52:14.015412 2980 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: object "kube-system"/"kube-proxy" not registered Apr 16 03:52:14.015953 kubelet[2980]: E0416 03:52:14.015923 2980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/102604ad-f0bc-4e5a-b79e-38de53bf7e5b-kube-proxy podName:102604ad-f0bc-4e5a-b79e-38de53bf7e5b nodeName:}" failed. No retries permitted until 2026-04-16 03:52:14.515790367 +0000 UTC m=+3.556285358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/102604ad-f0bc-4e5a-b79e-38de53bf7e5b-kube-proxy") pod "kube-proxy-jmwjb" (UID: "102604ad-f0bc-4e5a-b79e-38de53bf7e5b") : object "kube-system"/"kube-proxy" not registered Apr 16 03:52:14.121530 systemd[1]: Created slice kubepods-besteffort-pod102604ad_f0bc_4e5a_b79e_38de53bf7e5b.slice - libcontainer container kubepods-besteffort-pod102604ad_f0bc_4e5a_b79e_38de53bf7e5b.slice. 
Apr 16 03:52:14.450770 kubelet[2980]: I0416 03:52:14.443806 2980 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.443713514 podStartE2EDuration="1.443713514s" podCreationTimestamp="2026-04-16 03:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 03:52:14.432424448 +0000 UTC m=+3.472919444" watchObservedRunningTime="2026-04-16 03:52:14.443713514 +0000 UTC m=+3.484208496" Apr 16 03:52:14.886899 kubelet[2980]: I0416 03:52:14.841704 2980 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.84153115 podStartE2EDuration="1.84153115s" podCreationTimestamp="2026-04-16 03:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 03:52:14.809464958 +0000 UTC m=+3.849959951" watchObservedRunningTime="2026-04-16 03:52:14.84153115 +0000 UTC m=+3.882026132" Apr 16 03:52:14.886899 kubelet[2980]: I0416 03:52:14.884983 2980 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.884960032 podStartE2EDuration="1.884960032s" podCreationTimestamp="2026-04-16 03:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 03:52:14.533390278 +0000 UTC m=+3.573885276" watchObservedRunningTime="2026-04-16 03:52:14.884960032 +0000 UTC m=+3.925455034" Apr 16 03:52:15.128170 containerd[1575]: time="2026-04-16T03:52:15.122668951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmwjb,Uid:102604ad-f0bc-4e5a-b79e-38de53bf7e5b,Namespace:kube-system,Attempt:0,}" Apr 16 03:52:15.398029 containerd[1575]: time="2026-04-16T03:52:15.397180869Z" 
level=info msg="connecting to shim 82651b60f478dea8943f80e4cc106547c83d0ef9e3101dce4bd763b673a69653" address="unix:///run/containerd/s/073d013c4504fe16e310b39e81e5ae3cd13ff3f5f3e8713dea6cbb42a70cfd70" namespace=k8s.io protocol=ttrpc version=3 Apr 16 03:52:15.529542 systemd[1]: Started cri-containerd-82651b60f478dea8943f80e4cc106547c83d0ef9e3101dce4bd763b673a69653.scope - libcontainer container 82651b60f478dea8943f80e4cc106547c83d0ef9e3101dce4bd763b673a69653. Apr 16 03:52:15.680065 containerd[1575]: time="2026-04-16T03:52:15.674587206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmwjb,Uid:102604ad-f0bc-4e5a-b79e-38de53bf7e5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"82651b60f478dea8943f80e4cc106547c83d0ef9e3101dce4bd763b673a69653\"" Apr 16 03:52:15.696035 containerd[1575]: time="2026-04-16T03:52:15.690945308Z" level=info msg="CreateContainer within sandbox \"82651b60f478dea8943f80e4cc106547c83d0ef9e3101dce4bd763b673a69653\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 16 03:52:15.769313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3681511041.mount: Deactivated successfully. 
Apr 16 03:52:15.779205 containerd[1575]: time="2026-04-16T03:52:15.779062781Z" level=info msg="Container 0470d6c0fc45d20180cecab748ab479f32d3e60fe166f656c595c7b8b6ef6d9e: CDI devices from CRI Config.CDIDevices: []" Apr 16 03:52:15.832765 containerd[1575]: time="2026-04-16T03:52:15.832331702Z" level=info msg="CreateContainer within sandbox \"82651b60f478dea8943f80e4cc106547c83d0ef9e3101dce4bd763b673a69653\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0470d6c0fc45d20180cecab748ab479f32d3e60fe166f656c595c7b8b6ef6d9e\"" Apr 16 03:52:15.835579 containerd[1575]: time="2026-04-16T03:52:15.835506160Z" level=info msg="StartContainer for \"0470d6c0fc45d20180cecab748ab479f32d3e60fe166f656c595c7b8b6ef6d9e\"" Apr 16 03:52:15.857873 containerd[1575]: time="2026-04-16T03:52:15.854798510Z" level=info msg="connecting to shim 0470d6c0fc45d20180cecab748ab479f32d3e60fe166f656c595c7b8b6ef6d9e" address="unix:///run/containerd/s/073d013c4504fe16e310b39e81e5ae3cd13ff3f5f3e8713dea6cbb42a70cfd70" protocol=ttrpc version=3 Apr 16 03:52:16.118141 systemd[1]: Started cri-containerd-0470d6c0fc45d20180cecab748ab479f32d3e60fe166f656c595c7b8b6ef6d9e.scope - libcontainer container 0470d6c0fc45d20180cecab748ab479f32d3e60fe166f656c595c7b8b6ef6d9e. Apr 16 03:52:16.166212 systemd[1]: Created slice kubepods-besteffort-pod1fd5a14c_9f90_43e3_abf1_9685462b990b.slice - libcontainer container kubepods-besteffort-pod1fd5a14c_9f90_43e3_abf1_9685462b990b.slice. 
Apr 16 03:52:16.273574 kubelet[2980]: I0416 03:52:16.268225 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzxnj\" (UniqueName: \"kubernetes.io/projected/1fd5a14c-9f90-43e3-abf1-9685462b990b-kube-api-access-lzxnj\") pod \"tigera-operator-6cf4cccc57-mwc4j\" (UID: \"1fd5a14c-9f90-43e3-abf1-9685462b990b\") " pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" Apr 16 03:52:16.273574 kubelet[2980]: I0416 03:52:16.268454 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1fd5a14c-9f90-43e3-abf1-9685462b990b-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-mwc4j\" (UID: \"1fd5a14c-9f90-43e3-abf1-9685462b990b\") " pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" Apr 16 03:52:16.373658 containerd[1575]: time="2026-04-16T03:52:16.358214869Z" level=info msg="StartContainer for \"0470d6c0fc45d20180cecab748ab479f32d3e60fe166f656c595c7b8b6ef6d9e\" returns successfully" Apr 16 03:52:16.860812 containerd[1575]: time="2026-04-16T03:52:16.859845798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-mwc4j,Uid:1fd5a14c-9f90-43e3-abf1-9685462b990b,Namespace:tigera-operator,Attempt:0,}" Apr 16 03:52:17.262072 containerd[1575]: time="2026-04-16T03:52:17.243817327Z" level=info msg="connecting to shim c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db" address="unix:///run/containerd/s/b40817c4c3b3e5498badbc035a393ccaaa43aaaa06e8111d2e4d4485037a2b06" namespace=k8s.io protocol=ttrpc version=3 Apr 16 03:52:17.429581 kubelet[2980]: I0416 03:52:17.428929 2980 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-jmwjb" podStartSLOduration=4.428910973 podStartE2EDuration="4.428910973s" podCreationTimestamp="2026-04-16 03:52:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-16 03:52:17.428758125 +0000 UTC m=+6.469253120" watchObservedRunningTime="2026-04-16 03:52:17.428910973 +0000 UTC m=+6.469405965" Apr 16 03:52:19.395926 systemd[1]: Started cri-containerd-c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db.scope - libcontainer container c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db. Apr 16 03:52:21.086224 kubelet[2980]: E0416 03:52:21.083917 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.496s" Apr 16 03:52:21.517019 containerd[1575]: time="2026-04-16T03:52:21.516078493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-mwc4j,Uid:1fd5a14c-9f90-43e3-abf1-9685462b990b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\"" Apr 16 03:52:21.544671 containerd[1575]: time="2026-04-16T03:52:21.543671604Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 16 03:52:24.396843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4223232401.mount: Deactivated successfully. 
Apr 16 03:52:35.428383 containerd[1575]: time="2026-04-16T03:52:35.427289983Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:52:35.431649 containerd[1575]: time="2026-04-16T03:52:35.430320544Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 16 03:52:35.446349 containerd[1575]: time="2026-04-16T03:52:35.445406272Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:52:35.486575 containerd[1575]: time="2026-04-16T03:52:35.486420337Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 03:52:35.487393 containerd[1575]: time="2026-04-16T03:52:35.487039367Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 13.943243484s" Apr 16 03:52:35.487393 containerd[1575]: time="2026-04-16T03:52:35.487248174Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 16 03:52:35.511786 containerd[1575]: time="2026-04-16T03:52:35.511219198Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 16 03:52:35.559261 containerd[1575]: time="2026-04-16T03:52:35.558996723Z" level=info msg="Container 
7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe: CDI devices from CRI Config.CDIDevices: []" Apr 16 03:52:35.632726 containerd[1575]: time="2026-04-16T03:52:35.619727341Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe\"" Apr 16 03:52:35.632726 containerd[1575]: time="2026-04-16T03:52:35.627415116Z" level=info msg="StartContainer for \"7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe\"" Apr 16 03:52:35.661204 containerd[1575]: time="2026-04-16T03:52:35.657661830Z" level=info msg="connecting to shim 7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe" address="unix:///run/containerd/s/b40817c4c3b3e5498badbc035a393ccaaa43aaaa06e8111d2e4d4485037a2b06" protocol=ttrpc version=3 Apr 16 03:52:35.866538 systemd[1]: Started cri-containerd-7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe.scope - libcontainer container 7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe. 
Apr 16 03:52:36.115796 containerd[1575]: time="2026-04-16T03:52:36.113786732Z" level=info msg="StartContainer for \"7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe\" returns successfully" Apr 16 03:52:52.776489 kubelet[2980]: E0416 03:52:52.776136 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.197s" Apr 16 03:53:01.175896 kubelet[2980]: E0416 03:53:01.067766 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.422s" Apr 16 03:53:03.175079 kubelet[2980]: E0416 03:53:03.161600 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.927s" Apr 16 03:53:08.290471 kubelet[2980]: E0416 03:53:08.286221 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.713s" Apr 16 03:53:09.655591 kubelet[2980]: E0416 03:53:09.644945 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.057s" Apr 16 03:53:10.380582 systemd[1]: cri-containerd-fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a.scope: Deactivated successfully. Apr 16 03:53:10.672754 systemd[1]: cri-containerd-fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a.scope: Consumed 11.828s CPU time, 53.4M memory peak. 
Apr 16 03:53:11.710536 containerd[1575]: time="2026-04-16T03:53:11.087985191Z" level=info msg="received container exit event container_id:\"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" id:\"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" pid:2827 exit_status:1 exited_at:{seconds:1776311591 nanos:86790208}" Apr 16 03:53:14.908984 kubelet[2980]: E0416 03:53:14.903044 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.196s" Apr 16 03:53:16.760202 kubelet[2980]: E0416 03:53:15.583601 2980 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded" Apr 16 03:53:21.540380 containerd[1575]: time="2026-04-16T03:53:21.538157689Z" level=error msg="failed to handle container TaskExit event container_id:\"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" id:\"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" pid:2827 exit_status:1 exited_at:{seconds:1776311591 nanos:86790208}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 16 03:53:22.796418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a-rootfs.mount: Deactivated successfully. Apr 16 03:53:23.722287 systemd[1]: cri-containerd-bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad.scope: Deactivated successfully. 
Apr 16 03:53:24.243081 containerd[1575]: time="2026-04-16T03:53:23.233567876Z" level=info msg="TaskExit event container_id:\"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" id:\"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" pid:2827 exit_status:1 exited_at:{seconds:1776311591 nanos:86790208}" Apr 16 03:53:24.243081 containerd[1575]: time="2026-04-16T03:53:24.031319282Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 16 03:53:23.727892 systemd[1]: cri-containerd-bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad.scope: Consumed 8.959s CPU time, 20.8M memory peak. Apr 16 03:53:25.610648 containerd[1575]: time="2026-04-16T03:53:24.373804477Z" level=info msg="received container exit event container_id:\"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\" id:\"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\" pid:2821 exit_status:1 exited_at:{seconds:1776311604 nanos:348869452}" Apr 16 03:53:28.978446 kubelet[2980]: E0416 03:53:28.978215 2980 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 16 03:53:31.821730 kubelet[2980]: E0416 03:53:31.755987 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="16.852s" Apr 16 03:53:33.315992 containerd[1575]: time="2026-04-16T03:53:33.307012737Z" level=error msg="Failed to handle backOff event container_id:\"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" id:\"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" pid:2827 exit_status:1 exited_at:{seconds:1776311591 nanos:86790208} for fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context 
deadline exceeded" Apr 16 03:53:33.713508 kubelet[2980]: E0416 03:53:33.703050 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:33.855347 containerd[1575]: time="2026-04-16T03:53:33.854179113Z" level=error msg="ttrpc: received message on inactive stream" stream=57 Apr 16 03:53:34.144791 kubelet[2980]: E0416 03:53:34.059000 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:34.460305 containerd[1575]: time="2026-04-16T03:53:34.443026764Z" level=error msg="failed to handle container TaskExit event container_id:\"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\" id:\"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\" pid:2821 exit_status:1 exited_at:{seconds:1776311604 nanos:348869452}" error="failed to stop container: failed to delete task: context deadline exceeded" Apr 16 03:53:34.646791 kubelet[2980]: E0416 03:53:34.645678 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.958s" Apr 16 03:53:34.756798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad-rootfs.mount: Deactivated successfully. 
Apr 16 03:53:34.765679 kubelet[2980]: I0416 03:53:34.765592 2980 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podStartSLOduration=65.81476721 podStartE2EDuration="1m19.765574261s" podCreationTimestamp="2026-04-16 03:52:15 +0000 UTC" firstStartedPulling="2026-04-16 03:52:21.542600066 +0000 UTC m=+10.583095049" lastFinishedPulling="2026-04-16 03:52:35.493407112 +0000 UTC m=+24.533902100" observedRunningTime="2026-04-16 03:52:36.865140051 +0000 UTC m=+25.905635041" watchObservedRunningTime="2026-04-16 03:53:34.765574261 +0000 UTC m=+83.806069258" Apr 16 03:53:34.823608 containerd[1575]: time="2026-04-16T03:53:34.819644314Z" level=error msg="ttrpc: received message on inactive stream" stream=43 Apr 16 03:53:36.166217 containerd[1575]: time="2026-04-16T03:53:36.163536906Z" level=info msg="TaskExit event container_id:\"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" id:\"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" pid:2827 exit_status:1 exited_at:{seconds:1776311591 nanos:86790208}" Apr 16 03:53:36.710378 containerd[1575]: time="2026-04-16T03:53:36.709398909Z" level=info msg="TaskExit event container_id:\"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\" id:\"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\" pid:2821 exit_status:1 exited_at:{seconds:1776311604 nanos:348869452}" Apr 16 03:53:36.871333 kubelet[2980]: I0416 03:53:36.871304 2980 scope.go:122] "RemoveContainer" containerID="fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a" Apr 16 03:53:36.872273 kubelet[2980]: E0416 03:53:36.872253 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:36.952225 containerd[1575]: time="2026-04-16T03:53:36.951361750Z" level=info msg="CreateContainer within sandbox 
\"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Apr 16 03:53:37.179896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798651801.mount: Deactivated successfully. Apr 16 03:53:37.184879 containerd[1575]: time="2026-04-16T03:53:37.184143399Z" level=info msg="Container a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63: CDI devices from CRI Config.CDIDevices: []" Apr 16 03:53:37.334757 containerd[1575]: time="2026-04-16T03:53:37.334671214Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63\"" Apr 16 03:53:37.347509 containerd[1575]: time="2026-04-16T03:53:37.339672718Z" level=info msg="StartContainer for \"a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63\"" Apr 16 03:53:37.357804 containerd[1575]: time="2026-04-16T03:53:37.357758598Z" level=info msg="connecting to shim a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63" address="unix:///run/containerd/s/b0f2c5cfffdebf676e7ed85c3328df6a87775c2b04620a5f0b47a494ee449f34" protocol=ttrpc version=3 Apr 16 03:53:37.686739 systemd[1]: Started cri-containerd-a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63.scope - libcontainer container a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63. 
Apr 16 03:53:37.974430 kubelet[2980]: I0416 03:53:37.973601 2980 scope.go:122] "RemoveContainer" containerID="bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad" Apr 16 03:53:37.987455 kubelet[2980]: E0416 03:53:37.975984 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:38.193798 containerd[1575]: time="2026-04-16T03:53:38.193684669Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Apr 16 03:53:38.287303 containerd[1575]: time="2026-04-16T03:53:38.285657029Z" level=info msg="Container c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd: CDI devices from CRI Config.CDIDevices: []" Apr 16 03:53:38.413249 containerd[1575]: time="2026-04-16T03:53:38.391080389Z" level=info msg="StartContainer for \"a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63\" returns successfully" Apr 16 03:53:38.701072 containerd[1575]: time="2026-04-16T03:53:38.534811807Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd\"" Apr 16 03:53:38.743454 containerd[1575]: time="2026-04-16T03:53:38.742916120Z" level=info msg="StartContainer for \"c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd\"" Apr 16 03:53:38.965441 containerd[1575]: time="2026-04-16T03:53:38.957241289Z" level=info msg="connecting to shim c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd" address="unix:///run/containerd/s/64fc1d346c666396b4a6f4eda52f8f58d8abeacdc8da519fac54d1b45f3029a3" protocol=ttrpc version=3 Apr 16 03:53:39.829962 kubelet[2980]: E0416 03:53:39.823354 2980 
dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:40.796688 systemd[1]: Started cri-containerd-c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd.scope - libcontainer container c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd. Apr 16 03:53:41.447294 kubelet[2980]: E0416 03:53:41.426423 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:42.485655 containerd[1575]: time="2026-04-16T03:53:42.481501509Z" level=info msg="StartContainer for \"c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd\" returns successfully" Apr 16 03:53:42.574548 kubelet[2980]: E0416 03:53:42.574235 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:43.737171 kubelet[2980]: E0416 03:53:43.721856 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:45.063713 kubelet[2980]: E0416 03:53:45.062923 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:45.723206 kubelet[2980]: E0416 03:53:45.722856 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:46.162219 sudo[1805]: pam_unix(sudo:session): session closed for user root Apr 16 03:53:46.213063 sshd[1804]: Connection closed by 10.0.0.1 port 39198 Apr 16 03:53:46.240400 sshd-session[1801]: 
pam_unix(sshd:session): session closed for user core Apr 16 03:53:46.610853 kubelet[2980]: E0416 03:53:46.426756 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:46.746173 systemd[1]: sshd@6-10.0.0.115:22-10.0.0.1:39198.service: Deactivated successfully. Apr 16 03:53:47.539057 systemd[1]: session-7.scope: Deactivated successfully. Apr 16 03:53:47.745714 systemd[1]: session-7.scope: Consumed 22.637s CPU time, 231.3M memory peak. Apr 16 03:53:48.277136 systemd-logind[1549]: Session 7 logged out. Waiting for processes to exit. Apr 16 03:53:48.569000 kubelet[2980]: E0416 03:53:48.278048 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.709s" Apr 16 03:53:48.685770 systemd-logind[1549]: Removed session 7. Apr 16 03:53:48.936081 kubelet[2980]: E0416 03:53:48.932786 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:50.106486 kubelet[2980]: E0416 03:53:50.070692 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.383s" Apr 16 03:53:54.039123 kubelet[2980]: E0416 03:53:54.038819 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.351s" Apr 16 03:53:56.029542 kubelet[2980]: E0416 03:53:55.979543 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.917s" Apr 16 03:53:57.440620 kubelet[2980]: E0416 03:53:57.434946 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.364s" Apr 16 03:53:57.925834 kubelet[2980]: E0416 03:53:57.925395 2980 dns.go:154] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:53:58.047978 kubelet[2980]: E0416 03:53:58.045157 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:54:10.048049 systemd[1]: cri-containerd-a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63.scope: Deactivated successfully. Apr 16 03:54:10.257582 systemd[1]: cri-containerd-a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63.scope: Consumed 5.080s CPU time, 17.5M memory peak. Apr 16 03:54:11.159648 containerd[1575]: time="2026-04-16T03:54:10.863545707Z" level=info msg="received container exit event container_id:\"a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63\" id:\"a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63\" pid:3472 exit_status:1 exited_at:{seconds:1776311650 nanos:858004058}" Apr 16 03:54:11.229640 systemd[1]: cri-containerd-7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe.scope: Deactivated successfully. Apr 16 03:54:11.278583 systemd[1]: cri-containerd-7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe.scope: Consumed 12.027s CPU time, 63.7M memory peak. 
Apr 16 03:54:11.895913 containerd[1575]: time="2026-04-16T03:54:11.865638731Z" level=info msg="received container exit event container_id:\"7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe\" id:\"7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe\" pid:3331 exit_status:1 exited_at:{seconds:1776311651 nanos:753203457}" Apr 16 03:54:12.371483 kubelet[2980]: E0416 03:54:12.285400 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="13.7s" Apr 16 03:54:12.565672 kubelet[2980]: E0416 03:54:12.449984 2980 kubelet_node_status.go:386] "Node not becoming ready in time after startup" Apr 16 03:54:13.481669 kubelet[2980]: E0416 03:54:13.460464 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:54:13.879682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63-rootfs.mount: Deactivated successfully. Apr 16 03:54:13.881339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe-rootfs.mount: Deactivated successfully. 
Apr 16 03:54:14.727762 kubelet[2980]: I0416 03:54:14.727260 2980 scope.go:122] "RemoveContainer" containerID="fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a" Apr 16 03:54:14.742884 kubelet[2980]: I0416 03:54:14.741871 2980 scope.go:122] "RemoveContainer" containerID="a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63" Apr 16 03:54:14.742884 kubelet[2980]: I0416 03:54:14.741978 2980 scope.go:122] "RemoveContainer" containerID="7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe" Apr 16 03:54:14.743388 kubelet[2980]: E0416 03:54:14.743020 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:54:14.764636 kubelet[2980]: E0416 03:54:14.763994 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 16 03:54:14.890901 containerd[1575]: time="2026-04-16T03:54:14.890375786Z" level=info msg="RemoveContainer for \"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\"" Apr 16 03:54:14.964401 containerd[1575]: time="2026-04-16T03:54:14.949601610Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Apr 16 03:54:15.026355 containerd[1575]: time="2026-04-16T03:54:15.012226570Z" level=info msg="RemoveContainer for \"fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a\" returns successfully" Apr 16 03:54:15.545525 containerd[1575]: time="2026-04-16T03:54:15.545223643Z" level=info msg="Container 
3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4: CDI devices from CRI Config.CDIDevices: []" Apr 16 03:54:15.693551 containerd[1575]: time="2026-04-16T03:54:15.693375380Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4\"" Apr 16 03:54:15.776639 containerd[1575]: time="2026-04-16T03:54:15.771237167Z" level=info msg="StartContainer for \"3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4\"" Apr 16 03:54:15.911193 containerd[1575]: time="2026-04-16T03:54:15.910419658Z" level=info msg="connecting to shim 3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4" address="unix:///run/containerd/s/b40817c4c3b3e5498badbc035a393ccaaa43aaaa06e8111d2e4d4485037a2b06" protocol=ttrpc version=3 Apr 16 03:54:17.125345 systemd[1]: Started cri-containerd-3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4.scope - libcontainer container 3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4. 
Apr 16 03:54:17.749408 kubelet[2980]: E0416 03:54:17.745387 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:54:19.117461 containerd[1575]: time="2026-04-16T03:54:19.116458609Z" level=error msg="get state for 3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4" error="context deadline exceeded" Apr 16 03:54:19.158283 containerd[1575]: time="2026-04-16T03:54:19.119251895Z" level=warning msg="unknown status" status=0 Apr 16 03:54:19.545912 kubelet[2980]: I0416 03:54:19.545469 2980 scope.go:122] "RemoveContainer" containerID="a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63" Apr 16 03:54:19.545912 kubelet[2980]: E0416 03:54:19.545911 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:54:19.545912 kubelet[2980]: E0416 03:54:19.546195 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 16 03:54:21.082192 kubelet[2980]: I0416 03:54:21.069743 2980 scope.go:122] "RemoveContainer" containerID="7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe" Apr 16 03:54:21.465565 containerd[1575]: time="2026-04-16T03:54:21.334512109Z" level=error msg="get state for 3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4" error="context deadline exceeded" Apr 16 03:54:21.465565 containerd[1575]: time="2026-04-16T03:54:21.334835169Z" level=warning msg="unknown status" status=0 Apr 16 03:54:21.903757 kubelet[2980]: I0416 
03:54:21.903670 2980 scope.go:122] "RemoveContainer" containerID="a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63" Apr 16 03:54:21.934710 kubelet[2980]: E0416 03:54:21.925218 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:54:22.316546 containerd[1575]: time="2026-04-16T03:54:22.276275109Z" level=info msg="RemoveContainer for \"7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe\"" Apr 16 03:54:22.632928 containerd[1575]: time="2026-04-16T03:54:22.577641402Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:2,}" Apr 16 03:54:23.533699 kubelet[2980]: E0416 03:54:23.519412 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:54:23.928360 containerd[1575]: time="2026-04-16T03:54:23.923680623Z" level=error msg="get state for 3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4" error="context deadline exceeded" Apr 16 03:54:23.928360 containerd[1575]: time="2026-04-16T03:54:23.927753726Z" level=warning msg="unknown status" status=0 Apr 16 03:54:24.597688 containerd[1575]: time="2026-04-16T03:54:24.570485050Z" level=error msg="get state for c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db" error="context deadline exceeded" Apr 16 03:54:25.024975 containerd[1575]: time="2026-04-16T03:54:24.750745040Z" level=warning msg="unknown status" status=0 Apr 16 03:54:25.130483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007269110.mount: Deactivated successfully. 
Apr 16 03:54:25.449728 containerd[1575]: time="2026-04-16T03:54:25.346952846Z" level=info msg="Container 35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9: CDI devices from CRI Config.CDIDevices: []" Apr 16 03:54:25.395631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2265242512.mount: Deactivated successfully. Apr 16 03:54:25.695146 kubelet[2980]: E0416 03:54:25.694669 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.127s" Apr 16 03:54:26.498754 containerd[1575]: time="2026-04-16T03:54:26.490535230Z" level=info msg="RemoveContainer for \"7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe\" returns successfully" Apr 16 03:54:26.938680 containerd[1575]: time="2026-04-16T03:54:26.937650731Z" level=error msg="get state for 3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4" error="context deadline exceeded" Apr 16 03:54:26.938680 containerd[1575]: time="2026-04-16T03:54:26.937873211Z" level=warning msg="unknown status" status=0 Apr 16 03:54:27.131271 kubelet[2980]: E0416 03:54:27.127208 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.432s" Apr 16 03:54:27.733896 containerd[1575]: time="2026-04-16T03:54:27.712535674Z" level=error msg="ttrpc: received message on inactive stream" stream=31 Apr 16 03:54:27.733896 containerd[1575]: time="2026-04-16T03:54:27.715432644Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 03:54:27.733896 containerd[1575]: time="2026-04-16T03:54:27.726047095Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 16 03:54:27.733896 containerd[1575]: time="2026-04-16T03:54:27.726389238Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 16 03:54:27.733896 containerd[1575]: time="2026-04-16T03:54:27.726404769Z" level=error msg="ttrpc: received message on inactive stream" stream=9 
Apr 16 03:54:27.946747 containerd[1575]: time="2026-04-16T03:54:27.944264829Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:2,} returns container id \"35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9\"" Apr 16 03:54:28.221184 containerd[1575]: time="2026-04-16T03:54:28.144286647Z" level=info msg="StartContainer for \"35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9\"" Apr 16 03:54:28.276460 containerd[1575]: time="2026-04-16T03:54:28.273886885Z" level=info msg="connecting to shim 35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9" address="unix:///run/containerd/s/b0f2c5cfffdebf676e7ed85c3328df6a87775c2b04620a5f0b47a494ee449f34" protocol=ttrpc version=3 Apr 16 03:54:28.349771 containerd[1575]: time="2026-04-16T03:54:28.347447965Z" level=info msg="StartContainer for \"3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4\" returns successfully" Apr 16 03:54:28.578571 kubelet[2980]: E0416 03:54:28.577905 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:54:29.203168 systemd[1]: Started cri-containerd-35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9.scope - libcontainer container 35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9. 
Apr 16 03:54:31.250748 containerd[1575]: time="2026-04-16T03:54:31.222795135Z" level=error msg="get state for 35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9" error="context deadline exceeded" Apr 16 03:54:31.466387 containerd[1575]: time="2026-04-16T03:54:31.263042455Z" level=warning msg="unknown status" status=0 Apr 16 03:54:31.839797 kubelet[2980]: E0416 03:54:31.837912 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.2s" Apr 16 03:54:32.865573 containerd[1575]: time="2026-04-16T03:54:32.794293853Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 16 03:54:33.872278 kubelet[2980]: E0416 03:54:33.867891 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:54:34.363305 kubelet[2980]: E0416 03:54:34.354936 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.776s" Apr 16 03:54:35.558760 containerd[1575]: time="2026-04-16T03:54:35.558242253Z" level=info msg="StartContainer for \"35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9\" returns successfully" Apr 16 03:54:36.322709 kubelet[2980]: E0416 03:54:36.321700 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.554s" Apr 16 03:54:37.855726 kubelet[2980]: E0416 03:54:37.766078 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.182s" Apr 16 03:54:38.214790 kubelet[2980]: E0416 03:54:38.139293 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:54:39.191692 kubelet[2980]: E0416 03:54:39.182696 2980 kubelet.go:3130] 
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:54:41.584950 kubelet[2980]: E0416 03:54:41.584486 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:54:42.688076 kubelet[2980]: E0416 03:54:42.681744 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:54:44.378565 kubelet[2980]: E0416 03:54:44.377608 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:54:45.875652 kubelet[2980]: E0416 03:54:45.843719 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.268s" Apr 16 03:54:48.146350 kubelet[2980]: E0416 03:54:48.135809 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.551s" Apr 16 03:54:49.757391 kubelet[2980]: E0416 03:54:49.742390 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:54:49.966112 kubelet[2980]: E0416 03:54:49.949621 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.2s" Apr 16 03:54:54.223787 kubelet[2980]: E0416 03:54:54.215861 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.158s" Apr 16 03:54:55.367558 kubelet[2980]: E0416 03:54:55.359543 2980 kubelet.go:3130] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:54:56.180582 kubelet[2980]: E0416 03:54:56.167524 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:54:58.189490 kubelet[2980]: E0416 03:54:58.185991 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.729s" Apr 16 03:55:01.016442 kubelet[2980]: E0416 03:55:01.012686 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:55:06.053635 kubelet[2980]: E0416 03:55:06.049609 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:55:11.693503 kubelet[2980]: E0416 03:55:11.488885 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:55:12.902963 kubelet[2980]: E0416 03:55:12.902748 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:55:16.821480 kubelet[2980]: E0416 03:55:16.817002 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:55:21.859153 kubelet[2980]: E0416 03:55:21.858617 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" Apr 16 03:55:27.207473 kubelet[2980]: E0416 03:55:27.181778 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:55:32.321664 kubelet[2980]: E0416 03:55:32.320147 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:55:36.240135 systemd[1]: Started sshd@7-10.0.0.115:22-10.0.0.1:40254.service - OpenSSH per-connection server daemon (10.0.0.1:40254). Apr 16 03:55:37.518215 kubelet[2980]: E0416 03:55:37.487636 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:55:39.056917 sshd[3662]: Accepted publickey for core from 10.0.0.1 port 40254 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:55:39.229159 sshd-session[3662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:55:39.413291 systemd-logind[1549]: New session 8 of user core. Apr 16 03:55:39.667473 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 16 03:55:43.132000 kubelet[2980]: E0416 03:55:43.127589 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:55:44.672829 kubelet[2980]: E0416 03:55:44.659827 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.004s" Apr 16 03:55:45.446935 kubelet[2980]: E0416 03:55:45.222916 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:55:48.661567 kubelet[2980]: E0416 03:55:48.654459 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:55:51.289908 kubelet[2980]: E0416 03:55:51.281838 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.657s" Apr 16 03:55:52.652556 kubelet[2980]: E0416 03:55:52.576825 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.294s" Apr 16 03:55:53.922402 kubelet[2980]: E0416 03:55:53.888853 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.218s" Apr 16 03:55:54.426846 kubelet[2980]: E0416 03:55:54.416529 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:55:56.030258 kubelet[2980]: E0416 03:55:56.021969 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.099s" Apr 16 03:55:59.032424 sshd[3665]: Connection closed by 10.0.0.1 port 40254 Apr 16 
03:55:59.032351 sshd-session[3662]: pam_unix(sshd:session): session closed for user core Apr 16 03:55:59.202696 systemd[1]: sshd@7-10.0.0.115:22-10.0.0.1:40254.service: Deactivated successfully. Apr 16 03:55:59.361078 systemd[1]: session-8.scope: Deactivated successfully. Apr 16 03:55:59.366935 systemd[1]: session-8.scope: Consumed 7.273s CPU time, 14.9M memory peak. Apr 16 03:55:59.542852 systemd-logind[1549]: Session 8 logged out. Waiting for processes to exit. Apr 16 03:55:59.545202 systemd-logind[1549]: Removed session 8. Apr 16 03:55:59.575170 kubelet[2980]: E0416 03:55:59.574392 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:01.644211 kubelet[2980]: E0416 03:56:01.643496 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:56:04.685315 kubelet[2980]: E0416 03:56:04.682757 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:04.728888 systemd[1]: Started sshd@8-10.0.0.115:22-10.0.0.1:39798.service - OpenSSH per-connection server daemon (10.0.0.1:39798). 
Apr 16 03:56:08.889876 kubelet[2980]: E0416 03:56:08.888720 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:56:09.582638 sshd[3688]: Accepted publickey for core from 10.0.0.1 port 39798 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:56:09.895700 sshd-session[3688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:56:10.214695 kubelet[2980]: E0416 03:56:10.190696 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:10.354907 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 16 03:56:10.376921 systemd-logind[1549]: New session 9 of user core. Apr 16 03:56:15.672939 kubelet[2980]: E0416 03:56:15.661528 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:16.044818 kubelet[2980]: E0416 03:56:16.027363 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.443s" Apr 16 03:56:17.741362 kubelet[2980]: E0416 03:56:17.739406 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.118s" Apr 16 03:56:21.114885 kubelet[2980]: E0416 03:56:21.059137 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:22.606187 kubelet[2980]: E0416 03:56:22.602775 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.012s" Apr 16 03:56:26.469027 
kubelet[2980]: E0416 03:56:26.468226 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:26.959192 sshd[3692]: Connection closed by 10.0.0.1 port 39798 Apr 16 03:56:27.081523 sshd-session[3688]: pam_unix(sshd:session): session closed for user core Apr 16 03:56:27.784751 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:39798.service: Deactivated successfully. Apr 16 03:56:28.056674 systemd[1]: sshd@8-10.0.0.115:22-10.0.0.1:39798.service: Consumed 1.842s CPU time, 3.2M memory peak. Apr 16 03:56:28.134804 systemd[1]: session-9.scope: Deactivated successfully. Apr 16 03:56:28.135543 systemd[1]: session-9.scope: Consumed 6.995s CPU time, 14.6M memory peak. Apr 16 03:56:28.334727 systemd-logind[1549]: Session 9 logged out. Waiting for processes to exit. Apr 16 03:56:28.386944 systemd[1]: cri-containerd-3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4.scope: Deactivated successfully. Apr 16 03:56:28.452992 systemd[1]: cri-containerd-3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4.scope: Consumed 16.753s CPU time, 54.3M memory peak. Apr 16 03:56:28.528990 containerd[1575]: time="2026-04-16T03:56:28.528418429Z" level=info msg="received container exit event container_id:\"3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4\" id:\"3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4\" pid:3595 exit_status:1 exited_at:{seconds:1776311788 nanos:482377064}" Apr 16 03:56:28.576701 systemd-logind[1549]: Removed session 9. 
Apr 16 03:56:29.692941 kubelet[2980]: E0416 03:56:29.679868 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.112s" Apr 16 03:56:31.652314 kubelet[2980]: E0416 03:56:31.646786 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:32.127830 systemd[1]: Started sshd@9-10.0.0.115:22-10.0.0.1:38362.service - OpenSSH per-connection server daemon (10.0.0.1:38362). Apr 16 03:56:32.653140 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4-rootfs.mount: Deactivated successfully. Apr 16 03:56:33.388439 kubelet[2980]: I0416 03:56:33.387911 2980 scope.go:122] "RemoveContainer" containerID="3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4" Apr 16 03:56:33.414500 kubelet[2980]: E0416 03:56:33.388915 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b" Apr 16 03:56:33.806051 sshd[3725]: Accepted publickey for core from 10.0.0.1 port 38362 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:56:33.829241 sshd-session[3725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:56:34.506008 systemd-logind[1549]: New session 10 of user core. Apr 16 03:56:34.573680 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 16 03:56:36.817025 kubelet[2980]: E0416 03:56:36.815564 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:56:36.964074 kubelet[2980]: E0416 03:56:36.791878 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:39.946336 kubelet[2980]: I0416 03:56:39.889436 2980 scope.go:122] "RemoveContainer" containerID="3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4" Apr 16 03:56:40.794589 containerd[1575]: time="2026-04-16T03:56:40.786041737Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Apr 16 03:56:42.171458 kubelet[2980]: E0416 03:56:42.160499 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:42.752796 containerd[1575]: time="2026-04-16T03:56:42.561393383Z" level=info msg="Container 6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf: CDI devices from CRI Config.CDIDevices: []" Apr 16 03:56:43.542847 containerd[1575]: time="2026-04-16T03:56:43.540684553Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf\"" Apr 16 03:56:43.629447 containerd[1575]: time="2026-04-16T03:56:43.629130073Z" level=info msg="StartContainer for \"6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf\"" Apr 16 03:56:43.751811 containerd[1575]: time="2026-04-16T03:56:43.751739137Z" 
level=info msg="connecting to shim 6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf" address="unix:///run/containerd/s/b40817c4c3b3e5498badbc035a393ccaaa43aaaa06e8111d2e4d4485037a2b06" protocol=ttrpc version=3 Apr 16 03:56:45.855109 sshd[3729]: Connection closed by 10.0.0.1 port 38362 Apr 16 03:56:45.874812 sshd-session[3725]: pam_unix(sshd:session): session closed for user core Apr 16 03:56:46.226984 systemd[1]: sshd@9-10.0.0.115:22-10.0.0.1:38362.service: Deactivated successfully. Apr 16 03:56:46.609406 systemd[1]: session-10.scope: Deactivated successfully. Apr 16 03:56:46.648762 systemd[1]: session-10.scope: Consumed 4.263s CPU time, 16M memory peak. Apr 16 03:56:46.863892 kubelet[2980]: E0416 03:56:46.858261 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.075s" Apr 16 03:56:46.896564 systemd-logind[1549]: Session 10 logged out. Waiting for processes to exit. Apr 16 03:56:47.415880 systemd-logind[1549]: Removed session 10. Apr 16 03:56:47.560743 kubelet[2980]: E0416 03:56:47.559237 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:47.836021 systemd[1]: Started cri-containerd-6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf.scope - libcontainer container 6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf. 
Apr 16 03:56:48.224544 kubelet[2980]: E0416 03:56:48.135073 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.216s" Apr 16 03:56:50.287680 containerd[1575]: time="2026-04-16T03:56:50.277671944Z" level=error msg="get state for 6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf" error="context deadline exceeded" Apr 16 03:56:50.287680 containerd[1575]: time="2026-04-16T03:56:50.288385034Z" level=warning msg="unknown status" status=0 Apr 16 03:56:51.752070 systemd[1]: Started sshd@10-10.0.0.115:22-10.0.0.1:58580.service - OpenSSH per-connection server daemon (10.0.0.1:58580). Apr 16 03:56:52.671273 kubelet[2980]: E0416 03:56:52.670536 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:56:52.910198 containerd[1575]: time="2026-04-16T03:56:52.906171154Z" level=error msg="get state for 6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf" error="context deadline exceeded" Apr 16 03:56:52.910198 containerd[1575]: time="2026-04-16T03:56:52.909487634Z" level=warning msg="unknown status" status=0 Apr 16 03:56:53.881760 kubelet[2980]: E0416 03:56:53.881700 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.302s" Apr 16 03:56:54.478032 containerd[1575]: time="2026-04-16T03:56:54.475590424Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 16 03:56:54.478032 containerd[1575]: time="2026-04-16T03:56:54.475927718Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 16 03:56:54.622321 sshd[3767]: Accepted publickey for core from 10.0.0.1 port 58580 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:56:54.810777 containerd[1575]: time="2026-04-16T03:56:54.779024262Z" level=warning msg="container event 
discarded" container=02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81 type=CONTAINER_CREATED_EVENT Apr 16 03:56:54.810777 containerd[1575]: time="2026-04-16T03:56:54.779419417Z" level=warning msg="container event discarded" container=02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81 type=CONTAINER_STARTED_EVENT Apr 16 03:56:54.854175 sshd-session[3767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:56:54.874922 containerd[1575]: time="2026-04-16T03:56:54.874618064Z" level=warning msg="container event discarded" container=6ea89118d7b9def11008b24c28723cda88641d52fcd63e992e9e394404b578a7 type=CONTAINER_CREATED_EVENT Apr 16 03:56:54.874922 containerd[1575]: time="2026-04-16T03:56:54.875030358Z" level=warning msg="container event discarded" container=6ea89118d7b9def11008b24c28723cda88641d52fcd63e992e9e394404b578a7 type=CONTAINER_STARTED_EVENT Apr 16 03:56:54.974601 containerd[1575]: time="2026-04-16T03:56:54.973587511Z" level=warning msg="container event discarded" container=b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f type=CONTAINER_CREATED_EVENT Apr 16 03:56:54.974601 containerd[1575]: time="2026-04-16T03:56:54.973942608Z" level=warning msg="container event discarded" container=b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f type=CONTAINER_STARTED_EVENT Apr 16 03:56:55.051330 containerd[1575]: time="2026-04-16T03:56:55.050531901Z" level=info msg="StartContainer for \"6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf\" returns successfully" Apr 16 03:56:55.100554 systemd-logind[1549]: New session 11 of user core. Apr 16 03:56:55.124496 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 16 03:56:55.192285 containerd[1575]: time="2026-04-16T03:56:55.190082344Z" level=warning msg="container event discarded" container=fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a type=CONTAINER_CREATED_EVENT Apr 16 03:56:55.192737 containerd[1575]: time="2026-04-16T03:56:55.192664213Z" level=warning msg="container event discarded" container=d820b403e7cd7162e9bcf2f3b3499edded1d3f3f4df3ec4a740cd260ca5f3851 type=CONTAINER_CREATED_EVENT Apr 16 03:56:55.265468 containerd[1575]: time="2026-04-16T03:56:55.227075084Z" level=warning msg="container event discarded" container=bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad type=CONTAINER_CREATED_EVENT Apr 16 03:56:55.628193 containerd[1575]: time="2026-04-16T03:56:55.621584319Z" level=warning msg="container event discarded" container=fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a type=CONTAINER_STARTED_EVENT Apr 16 03:56:55.726393 containerd[1575]: time="2026-04-16T03:56:55.672128696Z" level=warning msg="container event discarded" container=d820b403e7cd7162e9bcf2f3b3499edded1d3f3f4df3ec4a740cd260ca5f3851 type=CONTAINER_STARTED_EVENT Apr 16 03:56:55.726393 containerd[1575]: time="2026-04-16T03:56:55.677560048Z" level=warning msg="container event discarded" container=bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad type=CONTAINER_STARTED_EVENT Apr 16 03:56:56.969868 sshd[3782]: Connection closed by 10.0.0.1 port 58580 Apr 16 03:56:57.012201 sshd-session[3767]: pam_unix(sshd:session): session closed for user core Apr 16 03:56:57.368182 systemd[1]: sshd@10-10.0.0.115:22-10.0.0.1:58580.service: Deactivated successfully. Apr 16 03:56:57.444931 systemd[1]: session-11.scope: Deactivated successfully. Apr 16 03:56:57.462385 systemd-logind[1549]: Session 11 logged out. Waiting for processes to exit. Apr 16 03:56:57.582045 systemd-logind[1549]: Removed session 11. 
Apr 16 03:56:57.745435 kubelet[2980]: E0416 03:56:57.744647 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:00.818484 kubelet[2980]: E0416 03:57:00.817710 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:57:02.036449 systemd[1]: Started sshd@11-10.0.0.115:22-10.0.0.1:34524.service - OpenSSH per-connection server daemon (10.0.0.1:34524). Apr 16 03:57:02.522602 sshd[3804]: Accepted publickey for core from 10.0.0.1 port 34524 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:57:02.531636 sshd-session[3804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:57:02.677615 systemd-logind[1549]: New session 12 of user core. Apr 16 03:57:02.719013 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 16 03:57:02.770441 kubelet[2980]: E0416 03:57:02.766980 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:05.676734 sshd[3807]: Connection closed by 10.0.0.1 port 34524 Apr 16 03:57:05.757789 sshd-session[3804]: pam_unix(sshd:session): session closed for user core Apr 16 03:57:05.872258 systemd[1]: sshd@11-10.0.0.115:22-10.0.0.1:34524.service: Deactivated successfully. Apr 16 03:57:06.068602 systemd[1]: session-12.scope: Deactivated successfully. Apr 16 03:57:06.103066 systemd[1]: session-12.scope: Consumed 1.288s CPU time, 15.4M memory peak. Apr 16 03:57:06.225785 systemd-logind[1549]: Session 12 logged out. Waiting for processes to exit. Apr 16 03:57:06.316566 systemd-logind[1549]: Removed session 12. 
Apr 16 03:57:07.777423 kubelet[2980]: E0416 03:57:07.776730 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:11.545884 systemd[1]: Started sshd@12-10.0.0.115:22-10.0.0.1:56248.service - OpenSSH per-connection server daemon (10.0.0.1:56248). Apr 16 03:57:13.071064 kubelet[2980]: E0416 03:57:13.070069 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:13.847878 kubelet[2980]: E0416 03:57:13.844722 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.271s" Apr 16 03:57:14.841314 sshd[3822]: Accepted publickey for core from 10.0.0.1 port 56248 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:57:14.885758 sshd-session[3822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:57:14.977396 systemd-logind[1549]: New session 13 of user core. Apr 16 03:57:14.991613 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 16 03:57:15.707902 containerd[1575]: time="2026-04-16T03:57:15.707047874Z" level=warning msg="container event discarded" container=82651b60f478dea8943f80e4cc106547c83d0ef9e3101dce4bd763b673a69653 type=CONTAINER_CREATED_EVENT Apr 16 03:57:15.707902 containerd[1575]: time="2026-04-16T03:57:15.707678093Z" level=warning msg="container event discarded" container=82651b60f478dea8943f80e4cc106547c83d0ef9e3101dce4bd763b673a69653 type=CONTAINER_STARTED_EVENT Apr 16 03:57:15.843062 containerd[1575]: time="2026-04-16T03:57:15.836264736Z" level=warning msg="container event discarded" container=0470d6c0fc45d20180cecab748ab479f32d3e60fe166f656c595c7b8b6ef6d9e type=CONTAINER_CREATED_EVENT Apr 16 03:57:16.393476 containerd[1575]: time="2026-04-16T03:57:16.392221973Z" level=warning msg="container event discarded" container=0470d6c0fc45d20180cecab748ab479f32d3e60fe166f656c595c7b8b6ef6d9e type=CONTAINER_STARTED_EVENT Apr 16 03:57:18.264446 kubelet[2980]: E0416 03:57:18.250744 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.663s" Apr 16 03:57:18.308544 kubelet[2980]: E0416 03:57:18.279208 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:21.561053 containerd[1575]: time="2026-04-16T03:57:21.558967104Z" level=warning msg="container event discarded" container=c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db type=CONTAINER_CREATED_EVENT Apr 16 03:57:21.561053 containerd[1575]: time="2026-04-16T03:57:21.560714403Z" level=warning msg="container event discarded" container=c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db type=CONTAINER_STARTED_EVENT Apr 16 03:57:21.932649 sshd[3827]: Connection closed by 10.0.0.1 port 56248 Apr 16 03:57:21.972008 sshd-session[3822]: pam_unix(sshd:session): session closed for user core Apr 16 
03:57:22.248469 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:56248.service: Deactivated successfully. Apr 16 03:57:22.387588 systemd[1]: sshd@12-10.0.0.115:22-10.0.0.1:56248.service: Consumed 1.026s CPU time, 3.3M memory peak. Apr 16 03:57:22.414878 systemd[1]: session-13.scope: Deactivated successfully. Apr 16 03:57:22.430231 systemd[1]: session-13.scope: Consumed 2.967s CPU time, 14.1M memory peak. Apr 16 03:57:22.511614 systemd-logind[1549]: Session 13 logged out. Waiting for processes to exit. Apr 16 03:57:22.732399 systemd-logind[1549]: Removed session 13. Apr 16 03:57:23.348831 kubelet[2980]: E0416 03:57:23.348007 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:26.747999 kubelet[2980]: E0416 03:57:26.747501 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:57:27.248144 systemd[1]: Started sshd@13-10.0.0.115:22-10.0.0.1:52332.service - OpenSSH per-connection server daemon (10.0.0.1:52332). Apr 16 03:57:28.682992 kubelet[2980]: E0416 03:57:28.658557 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:30.849190 kubelet[2980]: E0416 03:57:30.843071 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.189s" Apr 16 03:57:31.341415 sshd[3845]: Accepted publickey for core from 10.0.0.1 port 52332 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:57:31.381829 sshd-session[3845]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:57:31.980556 systemd-logind[1549]: New session 14 of user core. 
Apr 16 03:57:32.125638 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 16 03:57:34.030746 kubelet[2980]: E0416 03:57:34.020603 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:34.981745 kubelet[2980]: E0416 03:57:34.979579 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:57:35.639747 containerd[1575]: time="2026-04-16T03:57:35.637983530Z" level=warning msg="container event discarded" container=7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe type=CONTAINER_CREATED_EVENT Apr 16 03:57:36.165973 containerd[1575]: time="2026-04-16T03:57:36.162398378Z" level=warning msg="container event discarded" container=7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe type=CONTAINER_STARTED_EVENT Apr 16 03:57:39.666567 kubelet[2980]: E0416 03:57:39.663578 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:40.008949 kubelet[2980]: E0416 03:57:39.964068 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.379s" Apr 16 03:57:42.771163 kubelet[2980]: E0416 03:57:42.674075 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.106s" Apr 16 03:57:44.082000 kubelet[2980]: E0416 03:57:44.081573 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.238s" Apr 16 03:57:45.155324 kubelet[2980]: E0416 03:57:45.153018 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:45.929078 kubelet[2980]: E0416 03:57:45.896617 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.235s" Apr 16 03:57:46.489594 systemd[1]: cri-containerd-6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf.scope: Deactivated successfully. Apr 16 03:57:46.656013 systemd[1]: cri-containerd-6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf.scope: Consumed 5.049s CPU time, 41.6M memory peak. Apr 16 03:57:46.860630 containerd[1575]: time="2026-04-16T03:57:46.846241458Z" level=info msg="received container exit event container_id:\"6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf\" id:\"6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf\" pid:3759 exit_status:1 exited_at:{seconds:1776311866 nanos:495286160}" Apr 16 03:57:47.086893 systemd[1]: cri-containerd-35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9.scope: Deactivated successfully. Apr 16 03:57:47.230289 systemd[1]: cri-containerd-35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9.scope: Consumed 41.993s CPU time, 49M memory peak, 4K read from disk. 
Apr 16 03:57:47.635227 containerd[1575]: time="2026-04-16T03:57:47.559575865Z" level=info msg="received container exit event container_id:\"35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9\" id:\"35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9\" pid:3629 exit_status:1 exited_at:{seconds:1776311867 nanos:431416907}" Apr 16 03:57:49.452052 kubelet[2980]: E0416 03:57:49.423829 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.848s" Apr 16 03:57:50.929177 kubelet[2980]: E0416 03:57:50.719977 2980 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 16 03:57:51.393014 kubelet[2980]: E0416 03:57:51.370945 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:52.548593 sshd[3850]: Connection closed by 10.0.0.1 port 52332 Apr 16 03:57:52.738400 sshd-session[3845]: pam_unix(sshd:session): session closed for user core Apr 16 03:57:53.168477 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:52332.service: Deactivated successfully. Apr 16 03:57:53.169262 systemd[1]: sshd@13-10.0.0.115:22-10.0.0.1:52332.service: Consumed 1.187s CPU time, 3.5M memory peak. Apr 16 03:57:53.187043 systemd[1]: session-14.scope: Deactivated successfully. Apr 16 03:57:53.193353 systemd[1]: session-14.scope: Consumed 4.586s CPU time, 14.2M memory peak. Apr 16 03:57:53.248018 systemd-logind[1549]: Session 14 logged out. Waiting for processes to exit. Apr 16 03:57:53.297646 systemd-logind[1549]: Removed session 14. 
Apr 16 03:57:53.379365 kubelet[2980]: E0416 03:57:53.355928 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.701s" Apr 16 03:57:53.440359 kubelet[2980]: E0416 03:57:53.384730 2980 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again" Apr 16 03:57:53.509448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9-rootfs.mount: Deactivated successfully. Apr 16 03:57:53.624585 systemd[1]: cri-containerd-c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd.scope: Deactivated successfully. Apr 16 03:57:53.632001 systemd[1]: cri-containerd-c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd.scope: Consumed 33.949s CPU time, 20.8M memory peak. Apr 16 03:57:53.638576 containerd[1575]: time="2026-04-16T03:57:53.638489868Z" level=info msg="received container exit event container_id:\"c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd\" id:\"c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd\" pid:3503 exit_status:1 exited_at:{seconds:1776311873 nanos:636303109}" Apr 16 03:57:53.660314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf-rootfs.mount: Deactivated successfully. 
Apr 16 03:57:55.978426 kubelet[2980]: E0416 03:57:55.936528 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.327s" Apr 16 03:57:56.514917 kubelet[2980]: I0416 03:57:56.037041 2980 scope.go:122] "RemoveContainer" containerID="a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63" Apr 16 03:57:56.999051 kubelet[2980]: I0416 03:57:56.994502 2980 scope.go:122] "RemoveContainer" containerID="35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9" Apr 16 03:57:57.304275 kubelet[2980]: E0416 03:57:57.302812 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:57:57.304275 kubelet[2980]: E0416 03:57:57.303247 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 16 03:57:57.304275 kubelet[2980]: E0416 03:57:56.825461 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:57:57.431915 kubelet[2980]: E0416 03:57:57.428604 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:57:57.475862 kubelet[2980]: E0416 03:57:57.474685 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:57:57.558901 kubelet[2980]: I0416 
03:57:57.548603 2980 scope.go:122] "RemoveContainer" containerID="6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf" Apr 16 03:57:57.574015 containerd[1575]: time="2026-04-16T03:57:57.552674400Z" level=info msg="RemoveContainer for \"a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63\"" Apr 16 03:57:58.200975 kubelet[2980]: E0416 03:57:58.056907 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b" Apr 16 03:57:58.536130 systemd[1]: Started sshd@14-10.0.0.115:22-10.0.0.1:50988.service - OpenSSH per-connection server daemon (10.0.0.1:50988). Apr 16 03:57:59.690772 kubelet[2980]: E0416 03:57:59.686693 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.87s" Apr 16 03:58:01.191956 containerd[1575]: time="2026-04-16T03:58:01.188835974Z" level=info msg="RemoveContainer for \"a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63\" returns successfully" Apr 16 03:58:01.297205 kubelet[2980]: I0416 03:58:01.296805 2980 scope.go:122] "RemoveContainer" containerID="35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9" Apr 16 03:58:01.664867 kubelet[2980]: E0416 03:58:01.651032 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:58:01.910989 kubelet[2980]: E0416 03:58:01.900503 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager 
pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 16 03:58:02.558608 kubelet[2980]: I0416 03:58:02.536831 2980 scope.go:122] "RemoveContainer" containerID="3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4" Apr 16 03:58:02.721302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd-rootfs.mount: Deactivated successfully. Apr 16 03:58:03.346266 kubelet[2980]: E0416 03:58:03.345668 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:58:05.838051 kubelet[2980]: E0416 03:58:05.836204 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.426s" Apr 16 03:58:06.063578 containerd[1575]: time="2026-04-16T03:58:05.943290868Z" level=info msg="RemoveContainer for \"3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4\"" Apr 16 03:58:07.178958 containerd[1575]: time="2026-04-16T03:58:07.171994020Z" level=info msg="RemoveContainer for \"3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4\" returns successfully" Apr 16 03:58:09.471873 kubelet[2980]: E0416 03:58:09.452347 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:58:10.595599 kubelet[2980]: E0416 03:58:10.595105 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.209s" Apr 16 03:58:10.834736 kubelet[2980]: I0416 03:58:10.831369 2980 scope.go:122] "RemoveContainer" containerID="35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9" Apr 16 
03:58:10.983473 kubelet[2980]: E0416 03:58:10.936680 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:58:11.314783 kubelet[2980]: I0416 03:58:11.092005 2980 scope.go:122] "RemoveContainer" containerID="6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf" Apr 16 03:58:11.629047 kubelet[2980]: E0416 03:58:11.621215 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.023s" Apr 16 03:58:11.629047 kubelet[2980]: I0416 03:58:11.627016 2980 scope.go:122] "RemoveContainer" containerID="bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad" Apr 16 03:58:12.270117 kubelet[2980]: I0416 03:58:12.173561 2980 scope.go:122] "RemoveContainer" containerID="c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd" Apr 16 03:58:12.328605 sshd[3912]: Accepted publickey for core from 10.0.0.1 port 50988 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 03:58:12.671953 kubelet[2980]: E0416 03:58:12.513487 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 03:58:12.855599 sshd-session[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 03:58:13.019243 containerd[1575]: time="2026-04-16T03:58:13.013730353Z" level=info msg="RemoveContainer for \"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\"" Apr 16 03:58:13.047987 containerd[1575]: time="2026-04-16T03:58:13.047803146Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:3,}" Apr 16 03:58:14.487026 containerd[1575]: time="2026-04-16T03:58:14.483486854Z" level=info 
msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:3,}" Apr 16 03:58:14.757405 systemd-logind[1549]: New session 15 of user core. Apr 16 03:58:14.957952 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 16 03:58:15.701551 containerd[1575]: time="2026-04-16T03:58:15.643662992Z" level=info msg="RemoveContainer for \"bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad\" returns successfully" Apr 16 03:58:15.645676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount655911329.mount: Deactivated successfully. Apr 16 03:58:15.978019 kubelet[2980]: E0416 03:58:15.970773 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 03:58:16.224525 containerd[1575]: time="2026-04-16T03:58:16.223965164Z" level=info msg="Container 7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691: CDI devices from CRI Config.CDIDevices: []" Apr 16 03:58:16.279313 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount917671303.mount: Deactivated successfully. Apr 16 03:58:16.328016 containerd[1575]: time="2026-04-16T03:58:16.323984399Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:2,}" Apr 16 03:58:16.404192 containerd[1575]: time="2026-04-16T03:58:16.402694696Z" level=info msg="Container 7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8: CDI devices from CRI Config.CDIDevices: []" Apr 16 03:58:16.403660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1112283146.mount: Deactivated successfully. 
Apr 16 03:58:16.552636 kubelet[2980]: E0416 03:58:16.551990 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.836s"
Apr 16 03:58:16.553910 containerd[1575]: time="2026-04-16T03:58:16.553760784Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:3,} returns container id \"7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691\""
Apr 16 03:58:16.658902 containerd[1575]: time="2026-04-16T03:58:16.644031409Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for &ContainerMetadata{Name:tigera-operator,Attempt:3,} returns container id \"7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8\""
Apr 16 03:58:16.684264 containerd[1575]: time="2026-04-16T03:58:16.678690637Z" level=info msg="StartContainer for \"7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691\""
Apr 16 03:58:16.707448 containerd[1575]: time="2026-04-16T03:58:16.698858297Z" level=info msg="Container 8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563: CDI devices from CRI Config.CDIDevices: []"
Apr 16 03:58:16.720375 containerd[1575]: time="2026-04-16T03:58:16.699592257Z" level=info msg="StartContainer for \"7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8\""
Apr 16 03:58:16.720375 containerd[1575]: time="2026-04-16T03:58:16.704235865Z" level=info msg="connecting to shim 7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691" address="unix:///run/containerd/s/b0f2c5cfffdebf676e7ed85c3328df6a87775c2b04620a5f0b47a494ee449f34" protocol=ttrpc version=3
Apr 16 03:58:16.735143 containerd[1575]: time="2026-04-16T03:58:16.734292498Z" level=info msg="connecting to shim 7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8" address="unix:///run/containerd/s/b40817c4c3b3e5498badbc035a393ccaaa43aaaa06e8111d2e4d4485037a2b06" protocol=ttrpc version=3
Apr 16 03:58:17.196075 containerd[1575]: time="2026-04-16T03:58:17.195959551Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:2,} returns container id \"8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563\""
Apr 16 03:58:17.229813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291657367.mount: Deactivated successfully.
Apr 16 03:58:17.345141 containerd[1575]: time="2026-04-16T03:58:17.344847121Z" level=info msg="StartContainer for \"8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563\""
Apr 16 03:58:17.417244 containerd[1575]: time="2026-04-16T03:58:17.415571051Z" level=info msg="connecting to shim 8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563" address="unix:///run/containerd/s/64fc1d346c666396b4a6f4eda52f8f58d8abeacdc8da519fac54d1b45f3029a3" protocol=ttrpc version=3
Apr 16 03:58:17.466345 systemd[1]: Started cri-containerd-7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8.scope - libcontainer container 7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8.
Apr 16 03:58:17.671296 systemd[1]: Started cri-containerd-7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691.scope - libcontainer container 7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691.
Apr 16 03:58:18.559489 systemd[1]: Started cri-containerd-8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563.scope - libcontainer container 8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563.
Apr 16 03:58:18.689811 sshd[3917]: Connection closed by 10.0.0.1 port 50988
Apr 16 03:58:18.697322 sshd-session[3912]: pam_unix(sshd:session): session closed for user core
Apr 16 03:58:18.756429 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:50988.service: Deactivated successfully.
Apr 16 03:58:18.764048 systemd[1]: sshd@14-10.0.0.115:22-10.0.0.1:50988.service: Consumed 3.779s CPU time, 3.5M memory peak.
Apr 16 03:58:18.889192 systemd[1]: session-15.scope: Deactivated successfully.
Apr 16 03:58:19.056144 systemd-logind[1549]: Session 15 logged out. Waiting for processes to exit.
Apr 16 03:58:19.058059 systemd-logind[1549]: Removed session 15.
Apr 16 03:58:19.721035 containerd[1575]: time="2026-04-16T03:58:19.713700235Z" level=info msg="StartContainer for \"7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691\" returns successfully"
Apr 16 03:58:20.379539 containerd[1575]: time="2026-04-16T03:58:20.377582856Z" level=info msg="StartContainer for \"7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8\" returns successfully"
Apr 16 03:58:20.402848 containerd[1575]: time="2026-04-16T03:58:20.396260213Z" level=info msg="StartContainer for \"8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563\" returns successfully"
Apr 16 03:58:20.501456 kubelet[2980]: E0416 03:58:20.500784 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:58:21.129640 kubelet[2980]: E0416 03:58:21.129463 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:58:21.612732 kubelet[2980]: E0416 03:58:21.612599 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:58:21.612732 kubelet[2980]: E0416 03:58:21.612724 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:58:22.653297 kubelet[2980]: E0416 03:58:22.652515 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:58:23.709416 kubelet[2980]: E0416 03:58:23.707871 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:58:23.994834 systemd[1]: Started sshd@15-10.0.0.115:22-10.0.0.1:52582.service - OpenSSH per-connection server daemon (10.0.0.1:52582).
Apr 16 03:58:25.661337 kubelet[2980]: E0416 03:58:25.659546 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:58:26.345472 kubelet[2980]: E0416 03:58:26.344437 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:58:27.263629 sshd[4039]: Accepted publickey for core from 10.0.0.1 port 52582 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 03:58:27.376276 sshd-session[4039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:58:29.061904 systemd-logind[1549]: New session 16 of user core.
Apr 16 03:58:29.241755 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 16 03:58:29.670778 kubelet[2980]: E0416 03:58:29.236060 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.612s"
Apr 16 03:58:32.483276 kubelet[2980]: E0416 03:58:32.368746 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:58:32.853908 kubelet[2980]: E0416 03:58:32.849940 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.369s"
Apr 16 03:58:33.452018 kubelet[2980]: E0416 03:58:33.414298 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:58:35.767558 kubelet[2980]: E0416 03:58:35.765875 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.187s"
Apr 16 03:58:36.734896 containerd[1575]: time="2026-04-16T03:58:36.734287049Z" level=warning msg="container event discarded" container=fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a type=CONTAINER_STOPPED_EVENT
Apr 16 03:58:36.856283 kubelet[2980]: E0416 03:58:36.853914 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:58:37.185551 kubelet[2980]: E0416 03:58:37.185433 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:58:37.330273 containerd[1575]: time="2026-04-16T03:58:37.329319809Z" level=warning msg="container event discarded" container=a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63 type=CONTAINER_CREATED_EVENT
Apr 16 03:58:37.372294 containerd[1575]: time="2026-04-16T03:58:37.371669184Z" level=warning msg="container event discarded" container=bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad type=CONTAINER_STOPPED_EVENT
Apr 16 03:58:37.526899 sshd[4042]: Connection closed by 10.0.0.1 port 52582
Apr 16 03:58:37.549936 sshd-session[4039]: pam_unix(sshd:session): session closed for user core
Apr 16 03:58:37.592937 kubelet[2980]: E0416 03:58:37.592748 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:58:37.615305 systemd[1]: sshd@15-10.0.0.115:22-10.0.0.1:52582.service: Deactivated successfully.
Apr 16 03:58:37.662694 systemd[1]: session-16.scope: Deactivated successfully.
Apr 16 03:58:37.685292 systemd[1]: session-16.scope: Consumed 3.368s CPU time, 15.5M memory peak.
Apr 16 03:58:37.737194 systemd-logind[1549]: Session 16 logged out. Waiting for processes to exit.
Apr 16 03:58:37.739754 systemd-logind[1549]: Removed session 16.
Apr 16 03:58:38.024426 kubelet[2980]: E0416 03:58:38.022708 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:58:38.373413 containerd[1575]: time="2026-04-16T03:58:38.371825570Z" level=warning msg="container event discarded" container=a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63 type=CONTAINER_STARTED_EVENT
Apr 16 03:58:38.497343 containerd[1575]: time="2026-04-16T03:58:38.492525606Z" level=warning msg="container event discarded" container=c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd type=CONTAINER_CREATED_EVENT
Apr 16 03:58:42.495365 containerd[1575]: time="2026-04-16T03:58:42.486415789Z" level=warning msg="container event discarded" container=c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd type=CONTAINER_STARTED_EVENT
Apr 16 03:58:42.698475 kubelet[2980]: E0416 03:58:42.684076 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:58:42.843196 systemd[1]: Started sshd@16-10.0.0.115:22-10.0.0.1:47052.service - OpenSSH per-connection server daemon (10.0.0.1:47052).
Apr 16 03:58:43.923180 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 47052 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 03:58:43.922659 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:58:44.024413 systemd-logind[1549]: New session 17 of user core.
Apr 16 03:58:44.054810 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 16 03:58:45.639585 sshd[4062]: Connection closed by 10.0.0.1 port 47052
Apr 16 03:58:45.641146 sshd-session[4059]: pam_unix(sshd:session): session closed for user core
Apr 16 03:58:45.813124 systemd[1]: sshd@16-10.0.0.115:22-10.0.0.1:47052.service: Deactivated successfully.
Apr 16 03:58:45.975636 systemd[1]: session-17.scope: Deactivated successfully.
Apr 16 03:58:46.045605 systemd-logind[1549]: Session 17 logged out. Waiting for processes to exit.
Apr 16 03:58:46.109233 systemd-logind[1549]: Removed session 17.
Apr 16 03:58:47.799434 kubelet[2980]: E0416 03:58:47.793784 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:58:51.031919 systemd[1]: Started sshd@17-10.0.0.115:22-10.0.0.1:33434.service - OpenSSH per-connection server daemon (10.0.0.1:33434).
Apr 16 03:58:51.640331 sshd[4078]: Accepted publickey for core from 10.0.0.1 port 33434 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 03:58:51.748285 sshd-session[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:58:51.892787 systemd-logind[1549]: New session 18 of user core.
Apr 16 03:58:51.952250 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 16 03:58:52.853932 kubelet[2980]: E0416 03:58:52.848754 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:58:53.681149 sshd[4083]: Connection closed by 10.0.0.1 port 33434
Apr 16 03:58:53.712026 sshd-session[4078]: pam_unix(sshd:session): session closed for user core
Apr 16 03:58:53.853126 systemd[1]: sshd@17-10.0.0.115:22-10.0.0.1:33434.service: Deactivated successfully.
Apr 16 03:58:53.917232 systemd[1]: session-18.scope: Deactivated successfully.
Apr 16 03:58:54.004152 systemd-logind[1549]: Session 18 logged out. Waiting for processes to exit.
Apr 16 03:58:54.061890 systemd-logind[1549]: Removed session 18.
Apr 16 03:58:57.880170 kubelet[2980]: E0416 03:58:57.869356 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:58:59.097395 systemd[1]: Started sshd@18-10.0.0.115:22-10.0.0.1:57560.service - OpenSSH per-connection server daemon (10.0.0.1:57560).
Apr 16 03:59:00.819913 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 57560 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 03:59:00.842337 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:59:01.005671 systemd-logind[1549]: New session 19 of user core.
Apr 16 03:59:01.042496 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 16 03:59:03.098864 kubelet[2980]: E0416 03:59:03.083516 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:03.890158 sshd[4101]: Connection closed by 10.0.0.1 port 57560
Apr 16 03:59:03.900376 sshd-session[4098]: pam_unix(sshd:session): session closed for user core
Apr 16 03:59:04.003564 systemd[1]: sshd@18-10.0.0.115:22-10.0.0.1:57560.service: Deactivated successfully.
Apr 16 03:59:04.016851 systemd[1]: session-19.scope: Deactivated successfully.
Apr 16 03:59:04.054763 systemd-logind[1549]: Session 19 logged out. Waiting for processes to exit.
Apr 16 03:59:04.140545 systemd-logind[1549]: Removed session 19.
Apr 16 03:59:08.253172 kubelet[2980]: E0416 03:59:08.251384 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:09.259288 systemd[1]: Started sshd@19-10.0.0.115:22-10.0.0.1:36124.service - OpenSSH per-connection server daemon (10.0.0.1:36124).
Apr 16 03:59:10.142280 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 36124 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 03:59:10.148760 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:59:10.669945 systemd-logind[1549]: New session 20 of user core.
Apr 16 03:59:10.924573 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 16 03:59:12.947481 sshd[4119]: Connection closed by 10.0.0.1 port 36124
Apr 16 03:59:12.948363 sshd-session[4116]: pam_unix(sshd:session): session closed for user core
Apr 16 03:59:12.979061 systemd[1]: sshd@19-10.0.0.115:22-10.0.0.1:36124.service: Deactivated successfully.
Apr 16 03:59:13.012183 systemd[1]: session-20.scope: Deactivated successfully.
Apr 16 03:59:13.014181 systemd-logind[1549]: Session 20 logged out. Waiting for processes to exit.
Apr 16 03:59:13.016160 systemd-logind[1549]: Removed session 20.
Apr 16 03:59:13.302790 kubelet[2980]: E0416 03:59:13.296417 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:13.602312 kubelet[2980]: E0416 03:59:13.594147 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:59:14.265979 containerd[1575]: time="2026-04-16T03:59:14.264906651Z" level=warning msg="container event discarded" container=7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe type=CONTAINER_STOPPED_EVENT
Apr 16 03:59:14.265979 containerd[1575]: time="2026-04-16T03:59:14.265607896Z" level=warning msg="container event discarded" container=a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63 type=CONTAINER_STOPPED_EVENT
Apr 16 03:59:15.117142 containerd[1575]: time="2026-04-16T03:59:15.037601313Z" level=warning msg="container event discarded" container=fe76b5620942308101f560f938403c083a1c250a6aede759020bd631b526994a type=CONTAINER_DELETED_EVENT
Apr 16 03:59:15.701922 containerd[1575]: time="2026-04-16T03:59:15.700307318Z" level=warning msg="container event discarded" container=3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4 type=CONTAINER_CREATED_EVENT
Apr 16 03:59:18.323337 systemd[1]: Started sshd@20-10.0.0.115:22-10.0.0.1:35472.service - OpenSSH per-connection server daemon (10.0.0.1:35472).
Apr 16 03:59:18.421533 kubelet[2980]: E0416 03:59:18.412478 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:19.505800 sshd[4139]: Accepted publickey for core from 10.0.0.1 port 35472 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 03:59:19.595460 sshd-session[4139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:59:19.924935 systemd-logind[1549]: New session 21 of user core.
Apr 16 03:59:20.228988 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 16 03:59:22.102261 sshd[4143]: Connection closed by 10.0.0.1 port 35472
Apr 16 03:59:22.153938 sshd-session[4139]: pam_unix(sshd:session): session closed for user core
Apr 16 03:59:22.612842 systemd[1]: sshd@20-10.0.0.115:22-10.0.0.1:35472.service: Deactivated successfully.
Apr 16 03:59:22.813317 systemd[1]: session-21.scope: Deactivated successfully.
Apr 16 03:59:22.866116 systemd-logind[1549]: Session 21 logged out. Waiting for processes to exit.
Apr 16 03:59:22.929956 systemd-logind[1549]: Removed session 21.
Apr 16 03:59:23.479192 kubelet[2980]: E0416 03:59:23.478791 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:26.448707 containerd[1575]: time="2026-04-16T03:59:26.437635890Z" level=warning msg="container event discarded" container=7577fbdc7c2a98aa2a5f1a8e64c2a75eb54af6c25249a67a80d36b81a9f3bcfe type=CONTAINER_DELETED_EVENT
Apr 16 03:59:27.826290 containerd[1575]: time="2026-04-16T03:59:27.821732236Z" level=warning msg="container event discarded" container=35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9 type=CONTAINER_CREATED_EVENT
Apr 16 03:59:28.146077 systemd[1]: Started sshd@21-10.0.0.115:22-10.0.0.1:34794.service - OpenSSH per-connection server daemon (10.0.0.1:34794).
Apr 16 03:59:28.625875 containerd[1575]: time="2026-04-16T03:59:28.350266662Z" level=warning msg="container event discarded" container=3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4 type=CONTAINER_STARTED_EVENT
Apr 16 03:59:28.797058 kubelet[2980]: E0416 03:59:28.795555 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:29.775903 kubelet[2980]: E0416 03:59:29.774820 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.145s"
Apr 16 03:59:32.166667 kubelet[2980]: E0416 03:59:32.050878 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.431s"
Apr 16 03:59:35.565257 containerd[1575]: time="2026-04-16T03:59:35.556704677Z" level=warning msg="container event discarded" container=35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9 type=CONTAINER_STARTED_EVENT
Apr 16 03:59:36.730216 kubelet[2980]: E0416 03:59:36.228651 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:39.545285 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 34794 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 03:59:40.018190 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 03:59:40.518512 kubelet[2980]: E0416 03:59:40.448996 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.752s"
Apr 16 03:59:41.041357 systemd-logind[1549]: New session 22 of user core.
Apr 16 03:59:41.332834 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 16 03:59:41.857412 kubelet[2980]: E0416 03:59:41.842603 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.383s"
Apr 16 03:59:42.098847 kubelet[2980]: E0416 03:59:42.097165 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:43.051203 kubelet[2980]: E0416 03:59:43.048821 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.024s"
Apr 16 03:59:44.467249 kubelet[2980]: E0416 03:59:44.466607 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.27s"
Apr 16 03:59:44.653811 kubelet[2980]: E0416 03:59:44.653706 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:59:46.646711 systemd[1]: cri-containerd-7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8.scope: Deactivated successfully.
Apr 16 03:59:46.789061 systemd[1]: cri-containerd-7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8.scope: Consumed 11.583s CPU time, 56.8M memory peak.
Apr 16 03:59:46.938561 containerd[1575]: time="2026-04-16T03:59:46.847928428Z" level=info msg="received container exit event container_id:\"7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8\" id:\"7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8\" pid:3965 exit_status:1 exited_at:{seconds:1776311986 nanos:693929167}"
Apr 16 03:59:47.470706 kubelet[2980]: E0416 03:59:47.464673 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:48.771669 kubelet[2980]: E0416 03:59:48.766854 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.104s"
Apr 16 03:59:49.969434 kubelet[2980]: E0416 03:59:49.964801 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 03:59:52.980271 kubelet[2980]: E0416 03:59:52.960626 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:53.497577 kubelet[2980]: E0416 03:59:53.437263 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.609s"
Apr 16 03:59:54.696832 kubelet[2980]: E0416 03:59:54.679328 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.233s"
Apr 16 03:59:54.754708 systemd[1]: cri-containerd-7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691.scope: Deactivated successfully.
Apr 16 03:59:54.758986 systemd[1]: cri-containerd-7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691.scope: Consumed 13.102s CPU time, 49.4M memory peak.
Apr 16 03:59:55.075995 sshd[4163]: Connection closed by 10.0.0.1 port 34794
Apr 16 03:59:55.111769 sshd-session[4159]: pam_unix(sshd:session): session closed for user core
Apr 16 03:59:55.359718 containerd[1575]: time="2026-04-16T03:59:55.343869792Z" level=info msg="received container exit event container_id:\"7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691\" id:\"7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691\" pid:3964 exit_status:1 exited_at:{seconds:1776311995 nanos:328603999}"
Apr 16 03:59:55.367397 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:34794.service: Deactivated successfully.
Apr 16 03:59:55.474714 systemd[1]: sshd@21-10.0.0.115:22-10.0.0.1:34794.service: Consumed 3.902s CPU time, 3.2M memory peak.
Apr 16 03:59:55.830851 systemd[1]: session-22.scope: Deactivated successfully.
Apr 16 03:59:55.971411 systemd[1]: session-22.scope: Consumed 5.735s CPU time, 15.3M memory peak.
Apr 16 03:59:56.314961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8-rootfs.mount: Deactivated successfully.
Apr 16 03:59:56.893813 systemd-logind[1549]: Session 22 logged out. Waiting for processes to exit.
Apr 16 03:59:57.448549 systemd-logind[1549]: Removed session 22.
Apr 16 03:59:58.811279 kubelet[2980]: E0416 03:59:58.793864 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 03:59:59.312219 kubelet[2980]: E0416 03:59:59.311941 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.615s"
Apr 16 03:59:59.886563 kubelet[2980]: E0416 03:59:59.885055 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:00:00.190346 kubelet[2980]: I0416 04:00:00.187435 2980 scope.go:122] "RemoveContainer" containerID="6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf"
Apr 16 04:00:00.540459 systemd[1]: Started sshd@22-10.0.0.115:22-10.0.0.1:37828.service - OpenSSH per-connection server daemon (10.0.0.1:37828).
Apr 16 04:00:00.696821 kubelet[2980]: I0416 04:00:00.669158 2980 scope.go:122] "RemoveContainer" containerID="7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8"
Apr 16 04:00:00.696821 kubelet[2980]: E0416 04:00:00.669425 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b"
Apr 16 04:00:01.230150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691-rootfs.mount: Deactivated successfully.
Apr 16 04:00:01.248238 containerd[1575]: time="2026-04-16T04:00:01.248047736Z" level=info msg="RemoveContainer for \"6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf\""
Apr 16 04:00:01.407203 containerd[1575]: time="2026-04-16T04:00:01.407031288Z" level=info msg="RemoveContainer for \"6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf\" returns successfully"
Apr 16 04:00:03.556874 kubelet[2980]: I0416 04:00:03.555689 2980 scope.go:122] "RemoveContainer" containerID="35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9"
Apr 16 04:00:03.576194 kubelet[2980]: I0416 04:00:03.561845 2980 scope.go:122] "RemoveContainer" containerID="7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691"
Apr 16 04:00:03.728800 kubelet[2980]: E0416 04:00:03.662027 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:00:03.762934 kubelet[2980]: E0416 04:00:03.757958 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:00:04.056577 kubelet[2980]: E0416 04:00:04.050308 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:04.208646 containerd[1575]: time="2026-04-16T04:00:04.199339342Z" level=info msg="RemoveContainer for \"35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9\""
Apr 16 04:00:04.605573 sshd[4211]: Accepted publickey for core from 10.0.0.1 port 37828 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:00:04.741520 sshd-session[4211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:00:04.846691 containerd[1575]: time="2026-04-16T04:00:04.746173277Z" level=info msg="RemoveContainer for \"35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9\" returns successfully"
Apr 16 04:00:05.104490 systemd-logind[1549]: New session 23 of user core.
Apr 16 04:00:05.195568 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 16 04:00:09.195751 kubelet[2980]: E0416 04:00:09.183763 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:09.774813 kubelet[2980]: I0416 04:00:09.773485 2980 scope.go:122] "RemoveContainer" containerID="7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691"
Apr 16 04:00:09.879082 kubelet[2980]: E0416 04:00:09.863260 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:00:09.968389 kubelet[2980]: E0416 04:00:09.930908 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:00:12.296799 kubelet[2980]: E0416 04:00:12.285692 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.66s"
Apr 16 04:00:14.360218 kubelet[2980]: E0416 04:00:14.357603 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.788s"
Apr 16 04:00:14.850823 kubelet[2980]: E0416 04:00:14.849040 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:15.256761 sshd[4215]: Connection closed by 10.0.0.1 port 37828
Apr 16 04:00:15.448559 sshd-session[4211]: pam_unix(sshd:session): session closed for user core
Apr 16 04:00:16.229849 kubelet[2980]: E0416 04:00:16.229807 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.639s"
Apr 16 04:00:16.376626 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:37828.service: Deactivated successfully.
Apr 16 04:00:16.452883 systemd[1]: sshd@22-10.0.0.115:22-10.0.0.1:37828.service: Consumed 1.372s CPU time, 3.4M memory peak.
Apr 16 04:00:16.677310 systemd[1]: session-23.scope: Deactivated successfully.
Apr 16 04:00:16.757609 systemd[1]: session-23.scope: Consumed 5.114s CPU time, 16.1M memory peak.
Apr 16 04:00:16.944377 systemd-logind[1549]: Session 23 logged out. Waiting for processes to exit.
Apr 16 04:00:17.197393 systemd-logind[1549]: Removed session 23.
Apr 16 04:00:17.738611 kubelet[2980]: E0416 04:00:17.734990 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.052s"
Apr 16 04:00:20.163802 kubelet[2980]: E0416 04:00:20.152967 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:20.836679 systemd[1]: Started sshd@23-10.0.0.115:22-10.0.0.1:36052.service - OpenSSH per-connection server daemon (10.0.0.1:36052).
Apr 16 04:00:21.675723 kubelet[2980]: E0416 04:00:21.670905 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:00:21.957291 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 36052 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:00:22.000819 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:00:22.590737 systemd-logind[1549]: New session 24 of user core.
Apr 16 04:00:22.615240 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 16 04:00:24.294446 sshd[4240]: Connection closed by 10.0.0.1 port 36052
Apr 16 04:00:24.352765 sshd-session[4235]: pam_unix(sshd:session): session closed for user core
Apr 16 04:00:24.668791 systemd[1]: sshd@23-10.0.0.115:22-10.0.0.1:36052.service: Deactivated successfully.
Apr 16 04:00:24.841202 systemd[1]: session-24.scope: Deactivated successfully.
Apr 16 04:00:24.903842 systemd-logind[1549]: Session 24 logged out. Waiting for processes to exit.
Apr 16 04:00:25.048789 systemd-logind[1549]: Removed session 24.
Apr 16 04:00:25.350068 kubelet[2980]: E0416 04:00:25.261845 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:27.592508 kubelet[2980]: I0416 04:00:27.591127 2980 scope.go:122] "RemoveContainer" containerID="7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8"
Apr 16 04:00:27.750532 containerd[1575]: time="2026-04-16T04:00:27.744178832Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:4,}"
Apr 16 04:00:27.793393 systemd[1]: Starting systemd-tmpfiles-clean.service - Cleanup of Temporary Directories...
Apr 16 04:00:27.938727 containerd[1575]: time="2026-04-16T04:00:27.860329186Z" level=info msg="Container 7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:00:28.357328 containerd[1575]: time="2026-04-16T04:00:28.341662712Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for &ContainerMetadata{Name:tigera-operator,Attempt:4,} returns container id \"7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90\""
Apr 16 04:00:28.371731 containerd[1575]: time="2026-04-16T04:00:28.367567540Z" level=info msg="StartContainer for \"7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90\""
Apr 16 04:00:28.568136 containerd[1575]: time="2026-04-16T04:00:28.555773631Z" level=info msg="connecting to shim 7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90" address="unix:///run/containerd/s/b40817c4c3b3e5498badbc035a393ccaaa43aaaa06e8111d2e4d4485037a2b06" protocol=ttrpc version=3
Apr 16 04:00:29.319541 systemd-tmpfiles[4253]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 16 04:00:29.321681 systemd-tmpfiles[4253]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Apr 16 04:00:29.322741 systemd-tmpfiles[4253]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 16 04:00:29.323307 systemd-tmpfiles[4253]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 16 04:00:29.351601 systemd-tmpfiles[4253]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 16 04:00:29.377722 systemd-tmpfiles[4253]: ACLs are not supported, ignoring.
Apr 16 04:00:29.378298 systemd-tmpfiles[4253]: ACLs are not supported, ignoring.
Apr 16 04:00:29.489739 systemd-tmpfiles[4253]: Detected autofs mount point /boot during canonicalization of boot.
Apr 16 04:00:29.490887 systemd-tmpfiles[4253]: Skipping /boot
Apr 16 04:00:29.716957 systemd[1]: Started sshd@24-10.0.0.115:22-10.0.0.1:52836.service - OpenSSH per-connection server daemon (10.0.0.1:52836).
Apr 16 04:00:29.899724 systemd[1]: Started cri-containerd-7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90.scope - libcontainer container 7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90.
Apr 16 04:00:29.927201 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Apr 16 04:00:29.927774 systemd[1]: Finished systemd-tmpfiles-clean.service - Cleanup of Temporary Directories.
Apr 16 04:00:29.978017 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Apr 16 04:00:30.393733 kubelet[2980]: E0416 04:00:30.389671 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:31.281466 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 52836 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:00:31.285004 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:00:31.349502 systemd-logind[1549]: New session 25 of user core.
Apr 16 04:00:31.358368 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 16 04:00:32.710689 containerd[1575]: time="2026-04-16T04:00:32.709576038Z" level=info msg="StartContainer for \"7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90\" returns successfully"
Apr 16 04:00:34.542469 kubelet[2980]: E0416 04:00:34.532707 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.961s"
Apr 16 04:00:35.662734 kubelet[2980]: E0416 04:00:35.659934 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:37.257286 sshd[4286]: Connection closed by 10.0.0.1 port 52836
Apr 16 04:00:37.286751 sshd-session[4269]: pam_unix(sshd:session): session closed for user core
Apr 16 04:00:37.418401 systemd[1]: sshd@24-10.0.0.115:22-10.0.0.1:52836.service: Deactivated successfully.
Apr 16 04:00:37.472637 systemd[1]: session-25.scope: Deactivated successfully.
Apr 16 04:00:37.479497 systemd[1]: session-25.scope: Consumed 2.574s CPU time, 17.7M memory peak.
Apr 16 04:00:37.482975 systemd-logind[1549]: Session 25 logged out. Waiting for processes to exit.
Apr 16 04:00:37.495781 systemd-logind[1549]: Removed session 25.
Apr 16 04:00:37.581341 kubelet[2980]: I0416 04:00:37.580164 2980 scope.go:122] "RemoveContainer" containerID="7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691"
Apr 16 04:00:37.581341 kubelet[2980]: E0416 04:00:37.580354 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:00:37.784436 containerd[1575]: time="2026-04-16T04:00:37.780800880Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:4,}"
Apr 16 04:00:37.926295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226084427.mount: Deactivated successfully.
Apr 16 04:00:37.943566 containerd[1575]: time="2026-04-16T04:00:37.943406848Z" level=info msg="Container 3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:00:37.983845 containerd[1575]: time="2026-04-16T04:00:37.982884363Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:4,} returns container id \"3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb\""
Apr 16 04:00:37.996390 containerd[1575]: time="2026-04-16T04:00:37.992468532Z" level=info msg="StartContainer for \"3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb\""
Apr 16 04:00:38.006171 containerd[1575]: time="2026-04-16T04:00:38.005481605Z" level=info msg="connecting to shim 3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb" address="unix:///run/containerd/s/b0f2c5cfffdebf676e7ed85c3328df6a87775c2b04620a5f0b47a494ee449f34" protocol=ttrpc version=3
Apr 16 04:00:38.405460 systemd[1]: Started cri-containerd-3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb.scope - libcontainer container 3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb.
Apr 16 04:00:39.298556 containerd[1575]: time="2026-04-16T04:00:39.296482627Z" level=info msg="StartContainer for \"3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb\" returns successfully"
Apr 16 04:00:39.820495 kubelet[2980]: E0416 04:00:39.820365 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:00:40.768991 kubelet[2980]: E0416 04:00:40.768330 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:41.171614 kubelet[2980]: E0416 04:00:41.168324 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:00:42.755901 systemd[1]: Started sshd@25-10.0.0.115:22-10.0.0.1:52058.service - OpenSSH per-connection server daemon (10.0.0.1:52058).
Apr 16 04:00:45.948301 kubelet[2980]: E0416 04:00:45.942790 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:49.090061 sshd[4341]: Accepted publickey for core from 10.0.0.1 port 52058 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:00:49.220543 sshd-session[4341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:00:50.577711 systemd-logind[1549]: New session 26 of user core.
Apr 16 04:00:50.797121 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 16 04:00:51.062656 kubelet[2980]: E0416 04:00:51.062113 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:51.369773 kubelet[2980]: E0416 04:00:51.365926 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:00:56.189320 kubelet[2980]: E0416 04:00:56.176188 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:00:57.953195 kubelet[2980]: E0416 04:00:57.951255 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.382s"
Apr 16 04:00:58.477221 sshd[4347]: Connection closed by 10.0.0.1 port 52058
Apr 16 04:00:58.545364 sshd-session[4341]: pam_unix(sshd:session): session closed for user core
Apr 16 04:00:58.664104 kubelet[2980]: E0416 04:00:58.664009 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:00:58.859474 systemd[1]: sshd@25-10.0.0.115:22-10.0.0.1:52058.service: Deactivated successfully.
Apr 16 04:00:58.872961 systemd[1]: sshd@25-10.0.0.115:22-10.0.0.1:52058.service: Consumed 2.303s CPU time, 3.2M memory peak.
Apr 16 04:00:59.096887 systemd[1]: session-26.scope: Deactivated successfully.
Apr 16 04:00:59.098338 systemd[1]: session-26.scope: Consumed 3.862s CPU time, 14.1M memory peak.
Apr 16 04:00:59.241830 systemd-logind[1549]: Session 26 logged out. Waiting for processes to exit.
Apr 16 04:00:59.587528 systemd-logind[1549]: Removed session 26.
Apr 16 04:01:01.464912 kubelet[2980]: E0416 04:01:01.460930 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:03.837787 systemd[1]: Started sshd@26-10.0.0.115:22-10.0.0.1:35658.service - OpenSSH per-connection server daemon (10.0.0.1:35658).
Apr 16 04:01:04.751069 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 35658 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:01:04.777603 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:01:04.947845 systemd-logind[1549]: New session 27 of user core.
Apr 16 04:01:04.996197 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 16 04:01:06.527465 kubelet[2980]: E0416 04:01:06.504038 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:08.571431 sshd[4370]: Connection closed by 10.0.0.1 port 35658
Apr 16 04:01:08.711340 sshd-session[4367]: pam_unix(sshd:session): session closed for user core
Apr 16 04:01:09.262064 systemd[1]: sshd@26-10.0.0.115:22-10.0.0.1:35658.service: Deactivated successfully.
Apr 16 04:01:09.490674 systemd[1]: session-27.scope: Deactivated successfully.
Apr 16 04:01:09.513234 systemd[1]: session-27.scope: Consumed 1.777s CPU time, 14.5M memory peak.
Apr 16 04:01:09.728512 systemd-logind[1549]: Session 27 logged out. Waiting for processes to exit.
Apr 16 04:01:09.926233 systemd-logind[1549]: Removed session 27.
Apr 16 04:01:11.564581 kubelet[2980]: E0416 04:01:11.559904 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:12.746307 kubelet[2980]: E0416 04:01:12.737859 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:01:14.139240 systemd[1]: Started sshd@27-10.0.0.115:22-10.0.0.1:54768.service - OpenSSH per-connection server daemon (10.0.0.1:54768).
Apr 16 04:01:15.388196 sshd[4389]: Accepted publickey for core from 10.0.0.1 port 54768 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:01:15.480832 sshd-session[4389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:01:16.061617 systemd-logind[1549]: New session 28 of user core.
Apr 16 04:01:16.226693 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 16 04:01:16.705909 kubelet[2980]: E0416 04:01:16.676040 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:20.336375 sshd[4392]: Connection closed by 10.0.0.1 port 54768
Apr 16 04:01:20.355318 sshd-session[4389]: pam_unix(sshd:session): session closed for user core
Apr 16 04:01:20.539354 systemd[1]: sshd@27-10.0.0.115:22-10.0.0.1:54768.service: Deactivated successfully.
Apr 16 04:01:20.590218 systemd[1]: session-28.scope: Deactivated successfully.
Apr 16 04:01:20.590843 systemd[1]: session-28.scope: Consumed 1.930s CPU time, 15.1M memory peak.
Apr 16 04:01:20.646901 systemd-logind[1549]: Session 28 logged out. Waiting for processes to exit.
Apr 16 04:01:20.683994 systemd-logind[1549]: Removed session 28.
Apr 16 04:01:21.849268 kubelet[2980]: E0416 04:01:21.842866 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:24.639230 kubelet[2980]: E0416 04:01:24.638480 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:01:26.202856 systemd[1]: Started sshd@28-10.0.0.115:22-10.0.0.1:47508.service - OpenSSH per-connection server daemon (10.0.0.1:47508).
Apr 16 04:01:26.909549 kubelet[2980]: E0416 04:01:26.888595 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:27.342216 sshd[4409]: Accepted publickey for core from 10.0.0.1 port 47508 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:01:27.345208 sshd-session[4409]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:01:28.322461 systemd-logind[1549]: New session 29 of user core.
Apr 16 04:01:28.485255 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 16 04:01:30.268977 kubelet[2980]: E0416 04:01:30.265528 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.693s"
Apr 16 04:01:31.945004 kubelet[2980]: E0416 04:01:31.927725 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:33.060940 containerd[1575]: time="2026-04-16T04:01:33.052347845Z" level=warning msg="container event discarded" container=3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4 type=CONTAINER_STOPPED_EVENT
Apr 16 04:01:37.320892 kubelet[2980]: E0416 04:01:37.294884 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:38.697817 sshd[4412]: Connection closed by 10.0.0.1 port 47508
Apr 16 04:01:38.880955 sshd-session[4409]: pam_unix(sshd:session): session closed for user core
Apr 16 04:01:39.478154 systemd[1]: sshd@28-10.0.0.115:22-10.0.0.1:47508.service: Deactivated successfully.
Apr 16 04:01:39.736720 systemd[1]: session-29.scope: Deactivated successfully.
Apr 16 04:01:39.751673 systemd[1]: session-29.scope: Consumed 3.956s CPU time, 13.7M memory peak.
Apr 16 04:01:39.935806 systemd-logind[1549]: Session 29 logged out. Waiting for processes to exit.
Apr 16 04:01:39.972335 systemd-logind[1549]: Removed session 29.
Apr 16 04:01:42.529350 kubelet[2980]: E0416 04:01:42.522079 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:43.501437 containerd[1575]: time="2026-04-16T04:01:43.487909877Z" level=warning msg="container event discarded" container=6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf type=CONTAINER_CREATED_EVENT
Apr 16 04:01:44.348549 systemd[1]: Started sshd@29-10.0.0.115:22-10.0.0.1:37520.service - OpenSSH per-connection server daemon (10.0.0.1:37520).
Apr 16 04:01:47.864176 kubelet[2980]: E0416 04:01:47.857729 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:49.439203 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 37520 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:01:50.072036 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:01:51.161350 systemd-logind[1549]: New session 30 of user core.
Apr 16 04:01:51.317403 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 16 04:01:52.105368 kubelet[2980]: E0416 04:01:52.097536 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.495s"
Apr 16 04:01:53.292986 kubelet[2980]: E0416 04:01:53.291476 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:53.750569 kubelet[2980]: E0416 04:01:53.750239 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.131s"
Apr 16 04:01:55.108903 containerd[1575]: time="2026-04-16T04:01:55.099749636Z" level=warning msg="container event discarded" container=6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf type=CONTAINER_STARTED_EVENT
Apr 16 04:01:55.789911 kubelet[2980]: E0416 04:01:55.778964 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.21s"
Apr 16 04:01:58.882286 kubelet[2980]: E0416 04:01:58.784852 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:01:59.819927 kubelet[2980]: E0416 04:01:59.819071 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.245s"
Apr 16 04:02:00.958307 kubelet[2980]: E0416 04:02:00.922971 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.053s"
Apr 16 04:02:03.951489 kubelet[2980]: E0416 04:02:03.949607 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:04.254870 kubelet[2980]: E0416 04:02:03.963524 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:04.670363 kubelet[2980]: E0416 04:02:04.668750 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:05.111662 sshd[4433]: Connection closed by 10.0.0.1 port 37520
Apr 16 04:02:05.313703 sshd-session[4429]: pam_unix(sshd:session): session closed for user core
Apr 16 04:02:06.185055 systemd[1]: sshd@29-10.0.0.115:22-10.0.0.1:37520.service: Deactivated successfully.
Apr 16 04:02:06.238452 systemd[1]: sshd@29-10.0.0.115:22-10.0.0.1:37520.service: Consumed 2.392s CPU time, 3.2M memory peak.
Apr 16 04:02:06.556781 systemd[1]: session-30.scope: Deactivated successfully.
Apr 16 04:02:06.593952 systemd[1]: session-30.scope: Consumed 6.311s CPU time, 13.5M memory peak.
Apr 16 04:02:06.743937 systemd-logind[1549]: Session 30 logged out. Waiting for processes to exit.
Apr 16 04:02:06.961572 systemd-logind[1549]: Removed session 30.
Apr 16 04:02:09.034712 kubelet[2980]: E0416 04:02:09.034547 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:10.116060 systemd[1]: Started sshd@30-10.0.0.115:22-10.0.0.1:33628.service - OpenSSH per-connection server daemon (10.0.0.1:33628).
Apr 16 04:02:10.726297 sshd[4453]: Accepted publickey for core from 10.0.0.1 port 33628 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:02:10.790536 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:02:11.218713 systemd-logind[1549]: New session 31 of user core.
Apr 16 04:02:11.272702 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 16 04:02:12.791149 sshd[4459]: Connection closed by 10.0.0.1 port 33628
Apr 16 04:02:12.803340 sshd-session[4453]: pam_unix(sshd:session): session closed for user core
Apr 16 04:02:12.946691 systemd[1]: sshd@30-10.0.0.115:22-10.0.0.1:33628.service: Deactivated successfully.
Apr 16 04:02:13.001901 systemd[1]: session-31.scope: Deactivated successfully.
Apr 16 04:02:13.047554 systemd-logind[1549]: Session 31 logged out. Waiting for processes to exit.
Apr 16 04:02:13.050659 systemd-logind[1549]: Removed session 31.
Apr 16 04:02:14.193631 kubelet[2980]: E0416 04:02:14.191041 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:19.092024 systemd[1]: Started sshd@31-10.0.0.115:22-10.0.0.1:54726.service - OpenSSH per-connection server daemon (10.0.0.1:54726).
Apr 16 04:02:19.764287 kubelet[2980]: E0416 04:02:19.754789 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:20.845669 kubelet[2980]: E0416 04:02:20.842937 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.253s"
Apr 16 04:02:22.150183 kubelet[2980]: E0416 04:02:22.141179 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.273s"
Apr 16 04:02:24.978530 kubelet[2980]: E0416 04:02:24.860846 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:27.600788 systemd[1]: cri-containerd-7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90.scope: Deactivated successfully.
Apr 16 04:02:27.671330 systemd[1]: cri-containerd-7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90.scope: Consumed 11.429s CPU time, 59M memory peak.
Apr 16 04:02:27.798979 containerd[1575]: time="2026-04-16T04:02:27.685277508Z" level=info msg="received container exit event container_id:\"7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90\" id:\"7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90\" pid:4272 exit_status:1 exited_at:{seconds:1776312147 nanos:663593810}"
Apr 16 04:02:28.178558 sshd[4477]: Accepted publickey for core from 10.0.0.1 port 54726 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:02:28.189895 sshd-session[4477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:02:28.361154 systemd[1]: cri-containerd-3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb.scope: Deactivated successfully.
Apr 16 04:02:28.369479 systemd[1]: cri-containerd-3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb.scope: Consumed 21.884s CPU time, 43.2M memory peak.
Apr 16 04:02:28.477627 containerd[1575]: time="2026-04-16T04:02:28.471604934Z" level=info msg="received container exit event container_id:\"3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb\" id:\"3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb\" pid:4320 exit_status:1 exited_at:{seconds:1776312148 nanos:387598257}"
Apr 16 04:02:28.483542 systemd-logind[1549]: New session 32 of user core.
Apr 16 04:02:28.492536 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 16 04:02:28.572703 systemd[1]: cri-containerd-8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563.scope: Deactivated successfully.
Apr 16 04:02:28.573193 systemd[1]: cri-containerd-8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563.scope: Consumed 33.805s CPU time, 20.7M memory peak.
Apr 16 04:02:28.740984 containerd[1575]: time="2026-04-16T04:02:28.739620220Z" level=info msg="received container exit event container_id:\"8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563\" id:\"8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563\" pid:3979 exit_status:1 exited_at:{seconds:1776312148 nanos:572931800}"
Apr 16 04:02:29.638694 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90-rootfs.mount: Deactivated successfully.
Apr 16 04:02:30.447698 kubelet[2980]: E0416 04:02:30.370041 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:31.979174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb-rootfs.mount: Deactivated successfully.
Apr 16 04:02:32.063992 kubelet[2980]: E0416 04:02:32.031080 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.317s"
Apr 16 04:02:32.195015 kubelet[2980]: E0416 04:02:32.189518 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:32.282052 kubelet[2980]: I0416 04:02:32.280497 2980 scope.go:122] "RemoveContainer" containerID="7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8"
Apr 16 04:02:32.282052 kubelet[2980]: I0416 04:02:32.281063 2980 scope.go:122] "RemoveContainer" containerID="7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90"
Apr 16 04:02:32.282052 kubelet[2980]: E0416 04:02:32.281286 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b"
Apr 16 04:02:32.611578 containerd[1575]: time="2026-04-16T04:02:32.609131494Z" level=info msg="RemoveContainer for \"7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8\""
Apr 16 04:02:32.857460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563-rootfs.mount: Deactivated successfully.
Apr 16 04:02:32.913765 containerd[1575]: time="2026-04-16T04:02:32.875070642Z" level=info msg="RemoveContainer for \"7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8\" returns successfully"
Apr 16 04:02:33.024977 sshd[4492]: Connection closed by 10.0.0.1 port 54726
Apr 16 04:02:33.186810 sshd-session[4477]: pam_unix(sshd:session): session closed for user core
Apr 16 04:02:33.496654 systemd[1]: sshd@31-10.0.0.115:22-10.0.0.1:54726.service: Deactivated successfully.
Apr 16 04:02:33.523942 systemd[1]: sshd@31-10.0.0.115:22-10.0.0.1:54726.service: Consumed 2.278s CPU time, 3.7M memory peak.
Apr 16 04:02:33.548978 kubelet[2980]: I0416 04:02:33.548813 2980 scope.go:122] "RemoveContainer" containerID="c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd"
Apr 16 04:02:33.549897 systemd[1]: session-32.scope: Deactivated successfully.
Apr 16 04:02:33.550563 systemd[1]: session-32.scope: Consumed 1.816s CPU time, 17.6M memory peak.
Apr 16 04:02:33.605111 kubelet[2980]: I0416 04:02:33.582902 2980 scope.go:122] "RemoveContainer" containerID="8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563"
Apr 16 04:02:33.609029 systemd-logind[1549]: Session 32 logged out. Waiting for processes to exit.
Apr 16 04:02:33.803159 systemd-logind[1549]: Removed session 32.
Apr 16 04:02:33.826823 kubelet[2980]: E0416 04:02:33.646276 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:33.826823 kubelet[2980]: E0416 04:02:33.699829 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 16 04:02:34.271345 containerd[1575]: time="2026-04-16T04:02:34.257747640Z" level=info msg="RemoveContainer for \"c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd\""
Apr 16 04:02:34.885961 containerd[1575]: time="2026-04-16T04:02:34.883705654Z" level=info msg="RemoveContainer for \"c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd\" returns successfully"
Apr 16 04:02:35.517487 kubelet[2980]: I0416 04:02:35.515420 2980 scope.go:122] "RemoveContainer" containerID="7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691"
Apr 16 04:02:35.909204 kubelet[2980]: E0416 04:02:35.888279 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:36.112063 kubelet[2980]: I0416 04:02:36.106858 2980 scope.go:122] "RemoveContainer" containerID="3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb"
Apr 16 04:02:36.141978 kubelet[2980]: E0416 04:02:36.138672 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:36.306289 kubelet[2980]: E0416 04:02:36.197070 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:02:36.342933 containerd[1575]: time="2026-04-16T04:02:36.335269164Z" level=info msg="RemoveContainer for \"7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691\""
Apr 16 04:02:36.475126 containerd[1575]: time="2026-04-16T04:02:36.474491733Z" level=info msg="RemoveContainer for \"7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691\" returns successfully"
Apr 16 04:02:38.885749 systemd[1]: Started sshd@32-10.0.0.115:22-10.0.0.1:34104.service - OpenSSH per-connection server daemon (10.0.0.1:34104).
Apr 16 04:02:39.489471 kubelet[2980]: I0416 04:02:39.482725 2980 scope.go:122] "RemoveContainer" containerID="3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb"
Apr 16 04:02:39.489471 kubelet[2980]: E0416 04:02:39.483265 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:39.513192 kubelet[2980]: E0416 04:02:39.506693 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:02:40.102471 kubelet[2980]: I0416 04:02:40.093644 2980 scope.go:122] "RemoveContainer" containerID="8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563"
Apr 16 04:02:40.102471 kubelet[2980]: E0416 04:02:40.099009 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:40.102471 kubelet[2980]: E0416 04:02:40.100014 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 16 04:02:40.111584 sshd[4539]: Accepted publickey for core from 10.0.0.1 port 34104 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:02:40.125661 sshd-session[4539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:02:40.195274 systemd-logind[1549]: New session 33 of user core.
Apr 16 04:02:40.221786 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 16 04:02:40.900644 kubelet[2980]: E0416 04:02:40.899187 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:41.320468 sshd[4544]: Connection closed by 10.0.0.1 port 34104
Apr 16 04:02:41.321891 sshd-session[4539]: pam_unix(sshd:session): session closed for user core
Apr 16 04:02:41.456406 systemd[1]: sshd@32-10.0.0.115:22-10.0.0.1:34104.service: Deactivated successfully.
Apr 16 04:02:41.463580 systemd[1]: session-33.scope: Deactivated successfully.
Apr 16 04:02:41.468115 systemd-logind[1549]: Session 33 logged out. Waiting for processes to exit.
Apr 16 04:02:41.543590 systemd-logind[1549]: Removed session 33.
Apr 16 04:02:45.914238 kubelet[2980]: E0416 04:02:45.911191 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:46.620861 systemd[1]: Started sshd@33-10.0.0.115:22-10.0.0.1:60844.service - OpenSSH per-connection server daemon (10.0.0.1:60844).
Apr 16 04:02:47.338242 sshd[4559]: Accepted publickey for core from 10.0.0.1 port 60844 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:02:47.341795 sshd-session[4559]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:02:47.358749 systemd-logind[1549]: New session 34 of user core.
Apr 16 04:02:47.380009 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 16 04:02:48.675026 sshd[4562]: Connection closed by 10.0.0.1 port 60844
Apr 16 04:02:48.679486 sshd-session[4559]: pam_unix(sshd:session): session closed for user core
Apr 16 04:02:48.735778 systemd[1]: sshd@33-10.0.0.115:22-10.0.0.1:60844.service: Deactivated successfully.
Apr 16 04:02:48.774908 systemd[1]: session-34.scope: Deactivated successfully.
Apr 16 04:02:48.782187 systemd-logind[1549]: Session 34 logged out. Waiting for processes to exit.
Apr 16 04:02:48.827719 systemd-logind[1549]: Removed session 34.
Apr 16 04:02:50.952743 kubelet[2980]: E0416 04:02:50.950322 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:51.610646 kubelet[2980]: I0416 04:02:51.598423 2980 scope.go:122] "RemoveContainer" containerID="8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563"
Apr 16 04:02:51.653004 kubelet[2980]: E0416 04:02:51.648666 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:51.816816 containerd[1575]: time="2026-04-16T04:02:51.816323390Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:3,}"
Apr 16 04:02:52.048468 containerd[1575]: time="2026-04-16T04:02:52.045501178Z" level=info msg="Container 461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:02:52.113810 containerd[1575]: time="2026-04-16T04:02:52.112290448Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:3,} returns container id \"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\""
Apr 16 04:02:52.154928 containerd[1575]: time="2026-04-16T04:02:52.126672357Z" level=info msg="StartContainer for \"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\""
Apr 16 04:02:52.195999 containerd[1575]: time="2026-04-16T04:02:52.195292865Z" level=info msg="connecting to shim 461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142" address="unix:///run/containerd/s/64fc1d346c666396b4a6f4eda52f8f58d8abeacdc8da519fac54d1b45f3029a3" protocol=ttrpc version=3
Apr 16 04:02:52.365542 systemd[1]: Started cri-containerd-461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142.scope - libcontainer container 461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142.
Apr 16 04:02:52.826183 containerd[1575]: time="2026-04-16T04:02:52.825834467Z" level=info msg="StartContainer for \"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\" returns successfully"
Apr 16 04:02:53.658949 containerd[1575]: time="2026-04-16T04:02:53.657989836Z" level=warning msg="container event discarded" container=35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9 type=CONTAINER_STOPPED_EVENT
Apr 16 04:02:53.773229 systemd[1]: Started sshd@34-10.0.0.115:22-10.0.0.1:60850.service - OpenSSH per-connection server daemon (10.0.0.1:60850).
Apr 16 04:02:53.841256 kubelet[2980]: E0416 04:02:53.841149 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:53.851690 containerd[1575]: time="2026-04-16T04:02:53.851508077Z" level=warning msg="container event discarded" container=6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf type=CONTAINER_STOPPED_EVENT
Apr 16 04:02:54.652701 sshd[4614]: Accepted publickey for core from 10.0.0.1 port 60850 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:02:54.656543 sshd-session[4614]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:02:54.709731 systemd-logind[1549]: New session 35 of user core.
Apr 16 04:02:54.794585 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 16 04:02:54.959188 kubelet[2980]: E0416 04:02:54.886981 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:55.602619 kubelet[2980]: E0416 04:02:55.602017 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:55.879840 kubelet[2980]: E0416 04:02:55.878140 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:02:55.954905 kubelet[2980]: E0416 04:02:55.954493 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:02:56.294435 sshd[4617]: Connection closed by 10.0.0.1 port 60850
Apr 16 04:02:56.311150 sshd-session[4614]: pam_unix(sshd:session): session closed for user core
Apr 16 04:02:56.507640 systemd[1]: sshd@34-10.0.0.115:22-10.0.0.1:60850.service: Deactivated successfully.
Apr 16 04:02:56.550052 systemd[1]: session-35.scope: Deactivated successfully.
Apr 16 04:02:56.565003 systemd-logind[1549]: Session 35 logged out. Waiting for processes to exit.
Apr 16 04:02:56.588670 systemd-logind[1549]: Removed session 35.
Apr 16 04:03:01.015172 kubelet[2980]: E0416 04:03:01.014485 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:01.211079 containerd[1575]: time="2026-04-16T04:03:01.202299522Z" level=warning msg="container event discarded" container=a387b738441587bca4e5b24f5d482b3146dc00d3510f3fc3f7bef3beb8dfca63 type=CONTAINER_DELETED_EVENT
Apr 16 04:03:01.854430 systemd[1]: Started sshd@35-10.0.0.115:22-10.0.0.1:54160.service - OpenSSH per-connection server daemon (10.0.0.1:54160).
Apr 16 04:03:05.333048 containerd[1575]: time="2026-04-16T04:03:05.319734360Z" level=warning msg="container event discarded" container=c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd type=CONTAINER_STOPPED_EVENT
Apr 16 04:03:06.240928 kubelet[2980]: E0416 04:03:06.236769 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:06.928784 kubelet[2980]: E0416 04:03:06.558926 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:03:07.293278 containerd[1575]: time="2026-04-16T04:03:07.185837635Z" level=warning msg="container event discarded" container=3e38c63f7892d2c3b23f707539dc17c68b584fda6e21a4f5196f9f4eb18b8fc4 type=CONTAINER_DELETED_EVENT
Apr 16 04:03:07.952933 kubelet[2980]: E0416 04:03:07.946724 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.356s"
Apr 16 04:03:09.518877 kubelet[2980]: E0416 04:03:09.505242 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:03:10.118384 kubelet[2980]: E0416 04:03:10.113390 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.513s"
Apr 16 04:03:11.687472 kubelet[2980]: E0416 04:03:11.685617 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:12.326005 kubelet[2980]: E0416 04:03:12.179795 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.552s"
Apr 16 04:03:14.313044 sshd[4632]: Accepted publickey for core from 10.0.0.1 port 54160 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:03:15.658238 containerd[1575]: time="2026-04-16T04:03:15.655550245Z" level=warning msg="container event discarded" container=bb4bbdb22f366a968271ebc48a0b7e5f451a62c12d047ea7dd03f49e10e9a2ad type=CONTAINER_DELETED_EVENT
Apr 16 04:03:15.670412 sshd-session[4632]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:03:16.560350 containerd[1575]: time="2026-04-16T04:03:16.554011989Z" level=warning msg="container event discarded" container=7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691 type=CONTAINER_CREATED_EVENT
Apr 16 04:03:16.668977 containerd[1575]: time="2026-04-16T04:03:16.657772440Z" level=warning msg="container event discarded" container=7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8 type=CONTAINER_CREATED_EVENT
Apr 16 04:03:17.167388 kubelet[2980]: E0416 04:03:17.135169 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:17.778048 containerd[1575]: time="2026-04-16T04:03:17.142176871Z" level=warning msg="container event discarded" container=8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563 type=CONTAINER_CREATED_EVENT
Apr 16 04:03:17.897753 kubelet[2980]: E0416 04:03:17.235824 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.572s"
Apr 16 04:03:17.839021 systemd-logind[1549]: New session 36 of user core.
Apr 16 04:03:18.351312 systemd[1]: Started session-36.scope - Session 36 of User core.
Apr 16 04:03:19.779321 containerd[1575]: time="2026-04-16T04:03:19.707533441Z" level=warning msg="container event discarded" container=7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691 type=CONTAINER_STARTED_EVENT
Apr 16 04:03:20.529165 containerd[1575]: time="2026-04-16T04:03:20.527167876Z" level=warning msg="container event discarded" container=7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8 type=CONTAINER_STARTED_EVENT
Apr 16 04:03:20.569840 containerd[1575]: time="2026-04-16T04:03:20.538976872Z" level=warning msg="container event discarded" container=8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563 type=CONTAINER_STARTED_EVENT
Apr 16 04:03:21.265782 kubelet[2980]: E0416 04:03:21.258326 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.02s"
Apr 16 04:03:22.312420 kubelet[2980]: E0416 04:03:22.198401 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:25.908644 sshd[4638]: Connection closed by 10.0.0.1 port 54160
Apr 16 04:03:25.941498 sshd-session[4632]: pam_unix(sshd:session): session closed for user core
Apr 16 04:03:25.983569 systemd[1]: sshd@35-10.0.0.115:22-10.0.0.1:54160.service: Deactivated successfully.
Apr 16 04:03:25.984222 systemd[1]: sshd@35-10.0.0.115:22-10.0.0.1:54160.service: Consumed 3.142s CPU time, 3.2M memory peak.
Apr 16 04:03:26.080464 systemd[1]: session-36.scope: Deactivated successfully.
Apr 16 04:03:26.088347 systemd[1]: session-36.scope: Consumed 2.269s CPU time, 15.4M memory peak.
Apr 16 04:03:26.092214 systemd-logind[1549]: Session 36 logged out. Waiting for processes to exit.
Apr 16 04:03:26.437763 systemd-logind[1549]: Removed session 36.
Apr 16 04:03:27.856370 kubelet[2980]: E0416 04:03:27.706497 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:30.783777 kubelet[2980]: E0416 04:03:30.783192 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.213s"
Apr 16 04:03:32.174554 systemd[1]: Started sshd@36-10.0.0.115:22-10.0.0.1:32876.service - OpenSSH per-connection server daemon (10.0.0.1:32876).
Apr 16 04:03:33.457865 kubelet[2980]: E0416 04:03:33.453721 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:35.273031 kubelet[2980]: E0416 04:03:35.263024 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.479s"
Apr 16 04:03:36.584921 kubelet[2980]: E0416 04:03:36.573399 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:03:38.131573 kubelet[2980]: E0416 04:03:38.120718 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.843s"
Apr 16 04:03:38.661243 kubelet[2980]: E0416 04:03:38.648798 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:39.876051 kubelet[2980]: E0416 04:03:39.855060 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.658s"
Apr 16 04:03:40.377587 sshd[4658]: Accepted publickey for core from 10.0.0.1 port 32876 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:03:40.461764 sshd-session[4658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:03:41.655072 systemd-logind[1549]: New session 37 of user core.
Apr 16 04:03:42.392482 systemd[1]: Started session-37.scope - Session 37 of User core.
Apr 16 04:03:43.957110 kubelet[2980]: E0416 04:03:43.951006 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:49.262708 kubelet[2980]: E0416 04:03:49.232582 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:50.974052 kubelet[2980]: E0416 04:03:50.946480 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.327s"
Apr 16 04:03:53.087026 kubelet[2980]: E0416 04:03:53.081668 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.125s"
Apr 16 04:03:54.573565 kubelet[2980]: E0416 04:03:54.571534 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.489s"
Apr 16 04:03:55.254740 kubelet[2980]: E0416 04:03:55.254430 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:03:55.588531 sshd[4661]: Connection closed by 10.0.0.1 port 32876
Apr 16 04:03:55.468867 sshd-session[4658]: pam_unix(sshd:session): session closed for user core
Apr 16 04:03:56.442764 kubelet[2980]: E0416 04:03:56.256825 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.672s"
Apr 16 04:03:56.678900 systemd[1]: sshd@36-10.0.0.115:22-10.0.0.1:32876.service: Deactivated successfully.
Apr 16 04:03:56.713366 systemd[1]: sshd@36-10.0.0.115:22-10.0.0.1:32876.service: Consumed 2.935s CPU time, 3.5M memory peak.
Apr 16 04:03:56.942922 systemd[1]: session-37.scope: Deactivated successfully.
Apr 16 04:03:57.050356 systemd[1]: session-37.scope: Consumed 5.969s CPU time, 15.4M memory peak.
Apr 16 04:03:57.240131 systemd-logind[1549]: Session 37 logged out. Waiting for processes to exit.
Apr 16 04:03:57.556067 systemd-logind[1549]: Removed session 37.
Apr 16 04:03:57.915951 kubelet[2980]: I0416 04:03:57.853548 2980 scope.go:122] "RemoveContainer" containerID="3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb"
Apr 16 04:03:58.127881 kubelet[2980]: E0416 04:03:58.124530 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:03:58.888826 containerd[1575]: time="2026-04-16T04:03:58.886897034Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:5,}"
Apr 16 04:04:01.050381 kubelet[2980]: E0416 04:04:01.039322 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:04:01.758453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3321141347.mount: Deactivated successfully.
Apr 16 04:04:02.078795 systemd[1]: Started sshd@37-10.0.0.115:22-10.0.0.1:49568.service - OpenSSH per-connection server daemon (10.0.0.1:49568).
Apr 16 04:04:02.697570 containerd[1575]: time="2026-04-16T04:04:02.681057292Z" level=info msg="Container 83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:04:02.683523 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount19879604.mount: Deactivated successfully.
Apr 16 04:04:03.193738 kubelet[2980]: E0416 04:04:03.193703 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.58s"
Apr 16 04:04:03.354200 kubelet[2980]: I0416 04:04:03.353901 2980 scope.go:122] "RemoveContainer" containerID="7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90"
Apr 16 04:04:03.578772 containerd[1575]: time="2026-04-16T04:04:03.560782488Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:5,} returns container id \"83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929\""
Apr 16 04:04:03.792718 containerd[1575]: time="2026-04-16T04:04:03.785592589Z" level=info msg="StartContainer for \"83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929\""
Apr 16 04:04:04.082024 containerd[1575]: time="2026-04-16T04:04:04.075709343Z" level=info msg="connecting to shim 83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929" address="unix:///run/containerd/s/b0f2c5cfffdebf676e7ed85c3328df6a87775c2b04620a5f0b47a494ee449f34" protocol=ttrpc version=3
Apr 16 04:04:04.400839 containerd[1575]: time="2026-04-16T04:04:04.364248324Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:5,}"
Apr 16 04:04:05.973436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3333870796.mount: Deactivated successfully.
Apr 16 04:04:06.115758 containerd[1575]: time="2026-04-16T04:04:06.106050747Z" level=info msg="Container d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:04:06.233661 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560201345.mount: Deactivated successfully.
Apr 16 04:04:06.346779 kubelet[2980]: E0416 04:04:06.340558 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:04:06.648560 containerd[1575]: time="2026-04-16T04:04:06.638019839Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for &ContainerMetadata{Name:tigera-operator,Attempt:5,} returns container id \"d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b\""
Apr 16 04:04:06.782541 containerd[1575]: time="2026-04-16T04:04:06.778704697Z" level=info msg="StartContainer for \"d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b\""
Apr 16 04:04:06.836182 sshd[4681]: Accepted publickey for core from 10.0.0.1 port 49568 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:04:07.260784 containerd[1575]: time="2026-04-16T04:04:07.258911891Z" level=info msg="connecting to shim d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b" address="unix:///run/containerd/s/b40817c4c3b3e5498badbc035a393ccaaa43aaaa06e8111d2e4d4485037a2b06" protocol=ttrpc version=3
Apr 16 04:04:07.272122 sshd-session[4681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:04:07.728397 systemd[1]: Started cri-containerd-83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929.scope - libcontainer container 83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929.
Apr 16 04:04:08.331902 systemd-logind[1549]: New session 38 of user core.
Apr 16 04:04:08.389274 systemd[1]: Started session-38.scope - Session 38 of User core.
Apr 16 04:04:09.749976 kubelet[2980]: E0416 04:04:09.671393 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.103s"
Apr 16 04:04:10.340797 containerd[1575]: time="2026-04-16T04:04:10.340638607Z" level=error msg="get state for 83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929" error="context deadline exceeded"
Apr 16 04:04:10.528779 containerd[1575]: time="2026-04-16T04:04:10.343290554Z" level=warning msg="unknown status" status=0
Apr 16 04:04:11.585543 systemd[1]: Started cri-containerd-d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b.scope - libcontainer container d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b.
Apr 16 04:04:11.978739 kubelet[2980]: E0416 04:04:11.958018 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.215s"
Apr 16 04:04:12.043554 containerd[1575]: time="2026-04-16T04:04:11.990548816Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 16 04:04:12.170547 kubelet[2980]: E0416 04:04:12.067710 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:04:12.477499 kubelet[2980]: E0416 04:04:12.477458 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:04:13.892702 containerd[1575]: time="2026-04-16T04:04:13.802831815Z" level=info msg="StartContainer for \"83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929\" returns successfully"
Apr 16 04:04:13.938924 kubelet[2980]: E0416 04:04:13.931185 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.05s"
Apr 16 04:04:15.163991 kubelet[2980]: E0416 04:04:15.159953 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:04:15.398256 sshd[4711]: Connection closed by 10.0.0.1 port 49568
Apr 16 04:04:15.575380 containerd[1575]: time="2026-04-16T04:04:15.397739341Z" level=error msg="get state for d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b" error="context deadline exceeded"
Apr 16 04:04:15.575380 containerd[1575]: time="2026-04-16T04:04:15.398030622Z" level=warning msg="unknown status" status=0
Apr 16 04:04:15.488539 sshd-session[4681]: pam_unix(sshd:session): session closed for user core
Apr 16 04:04:15.764840 systemd[1]: sshd@37-10.0.0.115:22-10.0.0.1:49568.service: Deactivated successfully.
Apr 16 04:04:15.767161 systemd[1]: sshd@37-10.0.0.115:22-10.0.0.1:49568.service: Consumed 1.657s CPU time, 3.2M memory peak.
Apr 16 04:04:15.960336 systemd[1]: session-38.scope: Deactivated successfully.
Apr 16 04:04:16.098328 systemd[1]: session-38.scope: Consumed 2.956s CPU time, 15M memory peak.
Apr 16 04:04:16.317116 systemd-logind[1549]: Session 38 logged out. Waiting for processes to exit.
Apr 16 04:04:16.655707 systemd-logind[1549]: Removed session 38.
Apr 16 04:04:17.536066 kubelet[2980]: E0416 04:04:17.526249 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:04:17.680065 containerd[1575]: time="2026-04-16T04:04:17.674489848Z" level=error msg="get state for d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b" error="context deadline exceeded"
Apr 16 04:04:17.680065 containerd[1575]: time="2026-04-16T04:04:17.674710884Z" level=warning msg="unknown status" status=0
Apr 16 04:04:18.371534 containerd[1575]: time="2026-04-16T04:04:18.370983643Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 16 04:04:18.371534 containerd[1575]: time="2026-04-16T04:04:18.371318292Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 16 04:04:19.745919 containerd[1575]: time="2026-04-16T04:04:19.745695981Z" level=info msg="StartContainer for \"d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b\" returns successfully"
Apr 16 04:04:20.604702 systemd[1]: Started sshd@38-10.0.0.115:22-10.0.0.1:50718.service - OpenSSH per-connection server daemon (10.0.0.1:50718).
Apr 16 04:04:21.057187 kubelet[2980]: E0416 04:04:21.057013 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:04:22.669370 kubelet[2980]: E0416 04:04:22.669184 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:04:23.690122 sshd[4768]: Accepted publickey for core from 10.0.0.1 port 50718 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:04:23.933172 sshd-session[4768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:04:24.289892 systemd-logind[1549]: New session 39 of user core. Apr 16 04:04:24.369708 systemd[1]: Started session-39.scope - Session 39 of User core. Apr 16 04:04:27.816602 kubelet[2980]: E0416 04:04:27.813289 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:04:28.586165 sshd[4774]: Connection closed by 10.0.0.1 port 50718 Apr 16 04:04:28.757889 sshd-session[4768]: pam_unix(sshd:session): session closed for user core Apr 16 04:04:29.084300 systemd[1]: sshd@38-10.0.0.115:22-10.0.0.1:50718.service: Deactivated successfully. Apr 16 04:04:29.216349 systemd[1]: sshd@38-10.0.0.115:22-10.0.0.1:50718.service: Consumed 1.511s CPU time, 3.5M memory peak. Apr 16 04:04:29.672368 systemd[1]: session-39.scope: Deactivated successfully. Apr 16 04:04:29.736572 systemd[1]: session-39.scope: Consumed 2.205s CPU time, 15.7M memory peak. Apr 16 04:04:30.039325 systemd-logind[1549]: Session 39 logged out. Waiting for processes to exit. Apr 16 04:04:30.334942 systemd-logind[1549]: Removed session 39. 
Apr 16 04:04:30.751889 kubelet[2980]: E0416 04:04:30.744930 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:04:31.570623 kubelet[2980]: E0416 04:04:31.569934 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:04:32.834124 kubelet[2980]: E0416 04:04:32.826992 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:04:33.839419 systemd[1]: Started sshd@39-10.0.0.115:22-10.0.0.1:49014.service - OpenSSH per-connection server daemon (10.0.0.1:49014). Apr 16 04:04:34.687815 sshd[4793]: Accepted publickey for core from 10.0.0.1 port 49014 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:04:34.703463 sshd-session[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:04:34.745990 systemd-logind[1549]: New session 40 of user core. Apr 16 04:04:34.777067 systemd[1]: Started session-40.scope - Session 40 of User core. Apr 16 04:04:37.501916 sshd[4796]: Connection closed by 10.0.0.1 port 49014 Apr 16 04:04:37.533803 sshd-session[4793]: pam_unix(sshd:session): session closed for user core Apr 16 04:04:38.065586 kubelet[2980]: E0416 04:04:38.064921 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:04:38.065674 systemd[1]: sshd@39-10.0.0.115:22-10.0.0.1:49014.service: Deactivated successfully. Apr 16 04:04:38.427174 systemd[1]: session-40.scope: Deactivated successfully. 
Apr 16 04:04:38.481843 systemd[1]: session-40.scope: Consumed 1.481s CPU time, 15.5M memory peak. Apr 16 04:04:38.625244 systemd-logind[1549]: Session 40 logged out. Waiting for processes to exit. Apr 16 04:04:38.840687 systemd-logind[1549]: Removed session 40. Apr 16 04:04:43.068572 systemd[1]: Started sshd@40-10.0.0.115:22-10.0.0.1:41476.service - OpenSSH per-connection server daemon (10.0.0.1:41476). Apr 16 04:04:43.135854 kubelet[2980]: E0416 04:04:43.086926 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:04:44.560066 sshd[4811]: Accepted publickey for core from 10.0.0.1 port 41476 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:04:44.644577 sshd-session[4811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:04:44.858029 systemd-logind[1549]: New session 41 of user core. Apr 16 04:04:44.887152 systemd[1]: Started session-41.scope - Session 41 of User core. Apr 16 04:04:46.410004 sshd[4815]: Connection closed by 10.0.0.1 port 41476 Apr 16 04:04:46.519262 sshd-session[4811]: pam_unix(sshd:session): session closed for user core Apr 16 04:04:46.632015 systemd[1]: sshd@40-10.0.0.115:22-10.0.0.1:41476.service: Deactivated successfully. Apr 16 04:04:46.690855 systemd[1]: session-41.scope: Deactivated successfully. Apr 16 04:04:46.721868 systemd-logind[1549]: Session 41 logged out. Waiting for processes to exit. Apr 16 04:04:46.786339 systemd-logind[1549]: Removed session 41. 
Apr 16 04:04:48.190245 kubelet[2980]: E0416 04:04:48.187314 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:04:49.572379 kubelet[2980]: E0416 04:04:49.571970 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:04:51.656740 systemd[1]: Started sshd@41-10.0.0.115:22-10.0.0.1:45550.service - OpenSSH per-connection server daemon (10.0.0.1:45550). Apr 16 04:04:52.772733 sshd[4834]: Accepted publickey for core from 10.0.0.1 port 45550 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:04:52.820002 sshd-session[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:04:52.851476 systemd-logind[1549]: New session 42 of user core. Apr 16 04:04:52.905972 systemd[1]: Started session-42.scope - Session 42 of User core. Apr 16 04:04:53.224543 kubelet[2980]: E0416 04:04:53.223690 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:04:55.942348 sshd[4837]: Connection closed by 10.0.0.1 port 45550 Apr 16 04:04:55.980385 sshd-session[4834]: pam_unix(sshd:session): session closed for user core Apr 16 04:04:56.045751 systemd[1]: sshd@41-10.0.0.115:22-10.0.0.1:45550.service: Deactivated successfully. Apr 16 04:04:56.210949 systemd[1]: session-42.scope: Deactivated successfully. Apr 16 04:04:56.220460 systemd-logind[1549]: Session 42 logged out. Waiting for processes to exit. Apr 16 04:04:56.223798 systemd-logind[1549]: Removed session 42. 
Apr 16 04:04:57.784268 containerd[1575]: time="2026-04-16T04:04:57.766512966Z" level=warning msg="container event discarded" container=7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8 type=CONTAINER_STOPPED_EVENT Apr 16 04:04:58.246401 kubelet[2980]: E0416 04:04:58.245857 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:01.272496 systemd[1]: Started sshd@42-10.0.0.115:22-10.0.0.1:47682.service - OpenSSH per-connection server daemon (10.0.0.1:47682). Apr 16 04:05:01.408807 containerd[1575]: time="2026-04-16T04:05:01.403893661Z" level=warning msg="container event discarded" container=7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691 type=CONTAINER_STOPPED_EVENT Apr 16 04:05:01.423727 containerd[1575]: time="2026-04-16T04:05:01.423365410Z" level=warning msg="container event discarded" container=6aca8cbe2bcd0959441fae5bcf349eeb38016d350767f540c3c567c70f140acf type=CONTAINER_DELETED_EVENT Apr 16 04:05:02.281936 sshd[4853]: Accepted publickey for core from 10.0.0.1 port 47682 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:05:02.284887 sshd-session[4853]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:05:02.382877 systemd-logind[1549]: New session 43 of user core. Apr 16 04:05:02.500164 systemd[1]: Started session-43.scope - Session 43 of User core. 
Apr 16 04:05:03.307074 kubelet[2980]: E0416 04:05:03.306502 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:04.055216 sshd[4857]: Connection closed by 10.0.0.1 port 47682 Apr 16 04:05:04.057348 sshd-session[4853]: pam_unix(sshd:session): session closed for user core Apr 16 04:05:04.115446 systemd[1]: sshd@42-10.0.0.115:22-10.0.0.1:47682.service: Deactivated successfully. Apr 16 04:05:04.137809 systemd[1]: session-43.scope: Deactivated successfully. Apr 16 04:05:04.171225 systemd-logind[1549]: Session 43 logged out. Waiting for processes to exit. Apr 16 04:05:04.206149 systemd-logind[1549]: Removed session 43. Apr 16 04:05:04.219524 systemd[1]: Started sshd@43-10.0.0.115:22-10.0.0.1:47692.service - OpenSSH per-connection server daemon (10.0.0.1:47692). Apr 16 04:05:04.780066 containerd[1575]: time="2026-04-16T04:05:04.779407131Z" level=warning msg="container event discarded" container=35528f009887eca63263975f2e9c95ed4fb1449e0a877dca51bf23857d9e79a9 type=CONTAINER_DELETED_EVENT Apr 16 04:05:05.028287 sshd[4872]: Accepted publickey for core from 10.0.0.1 port 47692 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:05:05.077861 sshd-session[4872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:05:05.475637 systemd-logind[1549]: New session 44 of user core. Apr 16 04:05:05.558871 systemd[1]: Started session-44.scope - Session 44 of User core. 
Apr 16 04:05:08.356316 kubelet[2980]: E0416 04:05:08.355851 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:08.720869 sshd[4875]: Connection closed by 10.0.0.1 port 47692 Apr 16 04:05:08.722935 sshd-session[4872]: pam_unix(sshd:session): session closed for user core Apr 16 04:05:08.781574 systemd[1]: sshd@43-10.0.0.115:22-10.0.0.1:47692.service: Deactivated successfully. Apr 16 04:05:08.814758 systemd[1]: session-44.scope: Deactivated successfully. Apr 16 04:05:08.815468 systemd[1]: session-44.scope: Consumed 1.174s CPU time, 23.8M memory peak. Apr 16 04:05:08.830132 systemd-logind[1549]: Session 44 logged out. Waiting for processes to exit. Apr 16 04:05:08.841669 systemd[1]: Started sshd@44-10.0.0.115:22-10.0.0.1:53736.service - OpenSSH per-connection server daemon (10.0.0.1:53736). Apr 16 04:05:08.854275 systemd-logind[1549]: Removed session 44. Apr 16 04:05:10.434392 sshd[4887]: Accepted publickey for core from 10.0.0.1 port 53736 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:05:10.449741 sshd-session[4887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:05:10.511629 systemd-logind[1549]: New session 45 of user core. Apr 16 04:05:10.557362 systemd[1]: Started session-45.scope - Session 45 of User core. Apr 16 04:05:11.883323 sshd[4890]: Connection closed by 10.0.0.1 port 53736 Apr 16 04:05:11.891289 sshd-session[4887]: pam_unix(sshd:session): session closed for user core Apr 16 04:05:12.074701 systemd[1]: sshd@44-10.0.0.115:22-10.0.0.1:53736.service: Deactivated successfully. Apr 16 04:05:12.174383 systemd[1]: session-45.scope: Deactivated successfully. Apr 16 04:05:12.185483 systemd-logind[1549]: Session 45 logged out. Waiting for processes to exit. Apr 16 04:05:12.191141 systemd-logind[1549]: Removed session 45. 
Apr 16 04:05:13.375306 kubelet[2980]: E0416 04:05:13.366081 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:17.023322 systemd[1]: Started sshd@45-10.0.0.115:22-10.0.0.1:34784.service - OpenSSH per-connection server daemon (10.0.0.1:34784). Apr 16 04:05:17.786662 sshd[4907]: Accepted publickey for core from 10.0.0.1 port 34784 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:05:17.793995 sshd-session[4907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:05:18.187065 systemd-logind[1549]: New session 46 of user core. Apr 16 04:05:18.206496 systemd[1]: Started session-46.scope - Session 46 of User core. Apr 16 04:05:18.568409 kubelet[2980]: E0416 04:05:18.550261 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:20.141181 sshd[4910]: Connection closed by 10.0.0.1 port 34784 Apr 16 04:05:20.159660 sshd-session[4907]: pam_unix(sshd:session): session closed for user core Apr 16 04:05:20.358819 systemd[1]: sshd@45-10.0.0.115:22-10.0.0.1:34784.service: Deactivated successfully. Apr 16 04:05:20.382443 systemd[1]: session-46.scope: Deactivated successfully. Apr 16 04:05:20.425757 systemd-logind[1549]: Session 46 logged out. Waiting for processes to exit. Apr 16 04:05:20.529657 systemd-logind[1549]: Removed session 46. 
Apr 16 04:05:22.590962 kubelet[2980]: E0416 04:05:22.590586 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:05:23.593819 kubelet[2980]: E0416 04:05:23.590612 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:25.376947 systemd[1]: Started sshd@46-10.0.0.115:22-10.0.0.1:34794.service - OpenSSH per-connection server daemon (10.0.0.1:34794). Apr 16 04:05:26.573154 sshd[4931]: Accepted publickey for core from 10.0.0.1 port 34794 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:05:26.639836 sshd-session[4931]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:05:26.714978 systemd-logind[1549]: New session 47 of user core. Apr 16 04:05:26.743385 systemd[1]: Started session-47.scope - Session 47 of User core. Apr 16 04:05:28.311543 containerd[1575]: time="2026-04-16T04:05:28.310070917Z" level=warning msg="container event discarded" container=7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90 type=CONTAINER_CREATED_EVENT Apr 16 04:05:28.602878 kubelet[2980]: E0416 04:05:28.601231 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:28.908114 sshd[4934]: Connection closed by 10.0.0.1 port 34794 Apr 16 04:05:28.937073 sshd-session[4931]: pam_unix(sshd:session): session closed for user core Apr 16 04:05:29.065490 systemd[1]: sshd@46-10.0.0.115:22-10.0.0.1:34794.service: Deactivated successfully. Apr 16 04:05:29.085702 systemd[1]: session-47.scope: Deactivated successfully. Apr 16 04:05:29.286299 systemd-logind[1549]: Session 47 logged out. Waiting for processes to exit. 
Apr 16 04:05:29.427590 systemd-logind[1549]: Removed session 47. Apr 16 04:05:32.564882 containerd[1575]: time="2026-04-16T04:05:32.564502089Z" level=warning msg="container event discarded" container=7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90 type=CONTAINER_STARTED_EVENT Apr 16 04:05:32.574326 kubelet[2980]: E0416 04:05:32.574200 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:05:33.669549 kubelet[2980]: E0416 04:05:33.668866 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:34.133749 systemd[1]: Started sshd@47-10.0.0.115:22-10.0.0.1:46168.service - OpenSSH per-connection server daemon (10.0.0.1:46168). Apr 16 04:05:35.801182 sshd[4951]: Accepted publickey for core from 10.0.0.1 port 46168 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:05:35.804270 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:05:36.049012 systemd-logind[1549]: New session 48 of user core. Apr 16 04:05:36.088585 systemd[1]: Started session-48.scope - Session 48 of User core. Apr 16 04:05:37.518215 sshd[4954]: Connection closed by 10.0.0.1 port 46168 Apr 16 04:05:37.521792 sshd-session[4951]: pam_unix(sshd:session): session closed for user core Apr 16 04:05:37.571451 systemd-logind[1549]: Session 48 logged out. Waiting for processes to exit. Apr 16 04:05:37.604157 systemd[1]: sshd@47-10.0.0.115:22-10.0.0.1:46168.service: Deactivated successfully. Apr 16 04:05:37.639468 systemd[1]: session-48.scope: Deactivated successfully. Apr 16 04:05:37.665517 systemd-logind[1549]: Removed session 48. 
Apr 16 04:05:37.992552 containerd[1575]: time="2026-04-16T04:05:37.991652461Z" level=warning msg="container event discarded" container=3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb type=CONTAINER_CREATED_EVENT Apr 16 04:05:38.708909 kubelet[2980]: E0416 04:05:38.708464 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:39.280398 containerd[1575]: time="2026-04-16T04:05:39.279393264Z" level=warning msg="container event discarded" container=3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb type=CONTAINER_STARTED_EVENT Apr 16 04:05:42.841900 systemd[1]: Started sshd@48-10.0.0.115:22-10.0.0.1:41436.service - OpenSSH per-connection server daemon (10.0.0.1:41436). Apr 16 04:05:43.715937 kubelet[2980]: E0416 04:05:43.711293 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:44.157607 sshd[4970]: Accepted publickey for core from 10.0.0.1 port 41436 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:05:44.182906 sshd-session[4970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:05:44.560223 systemd-logind[1549]: New session 49 of user core. Apr 16 04:05:44.646312 systemd[1]: Started session-49.scope - Session 49 of User core. Apr 16 04:05:47.175697 sshd[4975]: Connection closed by 10.0.0.1 port 41436 Apr 16 04:05:47.174665 sshd-session[4970]: pam_unix(sshd:session): session closed for user core Apr 16 04:05:47.464234 systemd-logind[1549]: Session 49 logged out. Waiting for processes to exit. Apr 16 04:05:47.468711 systemd[1]: sshd@48-10.0.0.115:22-10.0.0.1:41436.service: Deactivated successfully. Apr 16 04:05:47.549390 systemd[1]: session-49.scope: Deactivated successfully. 
Apr 16 04:05:47.637670 systemd-logind[1549]: Removed session 49. Apr 16 04:05:48.760259 kubelet[2980]: E0416 04:05:48.748988 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:52.403879 systemd[1]: Started sshd@49-10.0.0.115:22-10.0.0.1:36734.service - OpenSSH per-connection server daemon (10.0.0.1:36734). Apr 16 04:05:53.282682 sshd[4998]: Accepted publickey for core from 10.0.0.1 port 36734 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:05:53.316534 sshd-session[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:05:53.543001 systemd-logind[1549]: New session 50 of user core. Apr 16 04:05:53.611922 systemd[1]: Started session-50.scope - Session 50 of User core. Apr 16 04:05:53.759623 kubelet[2980]: E0416 04:05:53.758595 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:54.938526 sshd[5001]: Connection closed by 10.0.0.1 port 36734 Apr 16 04:05:54.946671 sshd-session[4998]: pam_unix(sshd:session): session closed for user core Apr 16 04:05:54.987293 systemd[1]: sshd@49-10.0.0.115:22-10.0.0.1:36734.service: Deactivated successfully. Apr 16 04:05:55.000878 systemd[1]: session-50.scope: Deactivated successfully. Apr 16 04:05:55.018590 systemd-logind[1549]: Session 50 logged out. Waiting for processes to exit. Apr 16 04:05:55.021571 systemd-logind[1549]: Removed session 50. 
Apr 16 04:05:58.766582 kubelet[2980]: E0416 04:05:58.763958 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:05:59.647990 kubelet[2980]: E0416 04:05:59.639224 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:05:59.791758 kubelet[2980]: E0416 04:05:59.789278 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:06:00.100505 systemd[1]: Started sshd@50-10.0.0.115:22-10.0.0.1:52046.service - OpenSSH per-connection server daemon (10.0.0.1:52046). Apr 16 04:06:00.332290 kubelet[2980]: I0416 04:06:00.332237 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-nodeproc\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.333173 kubelet[2980]: I0416 04:06:00.333017 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-xtables-lock\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.333173 kubelet[2980]: I0416 04:06:00.333050 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-lib-modules\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " 
pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.333173 kubelet[2980]: I0416 04:06:00.333070 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-policysync\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.333347 kubelet[2980]: I0416 04:06:00.333181 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnx9g\" (UniqueName: \"kubernetes.io/projected/89b5fbad-4c87-4aac-9951-121c09bbd556-kube-api-access-nnx9g\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.333887 kubelet[2980]: I0416 04:06:00.333434 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-var-lib-calico\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.333887 kubelet[2980]: I0416 04:06:00.333473 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/89b5fbad-4c87-4aac-9951-121c09bbd556-node-certs\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.333887 kubelet[2980]: I0416 04:06:00.333538 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-bpffs\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.333887 kubelet[2980]: I0416 
04:06:00.333563 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-cni-log-dir\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.333887 kubelet[2980]: I0416 04:06:00.333626 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89b5fbad-4c87-4aac-9951-121c09bbd556-tigera-ca-bundle\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.334076 kubelet[2980]: I0416 04:06:00.333666 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-cni-net-dir\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.334076 kubelet[2980]: I0416 04:06:00.333730 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-var-run-calico\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.334076 kubelet[2980]: I0416 04:06:00.333753 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-cni-bin-dir\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.334076 kubelet[2980]: I0416 04:06:00.333772 2980 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-flexvol-driver-host\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.334076 kubelet[2980]: I0416 04:06:00.333839 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/89b5fbad-4c87-4aac-9951-121c09bbd556-sys-fs\") pod \"calico-node-kgtx5\" (UID: \"89b5fbad-4c87-4aac-9951-121c09bbd556\") " pod="calico-system/calico-node-kgtx5" Apr 16 04:06:00.649685 systemd[1]: Created slice kubepods-besteffort-pod89b5fbad_4c87_4aac_9951_121c09bbd556.slice - libcontainer container kubepods-besteffort-pod89b5fbad_4c87_4aac_9951_121c09bbd556.slice. Apr 16 04:06:01.010231 kubelet[2980]: E0416 04:06:01.005781 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.010231 kubelet[2980]: W0416 04:06:01.006017 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.010231 kubelet[2980]: E0416 04:06:01.006176 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:01.010231 kubelet[2980]: E0416 04:06:01.007598 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.010231 kubelet[2980]: W0416 04:06:01.007616 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.010231 kubelet[2980]: E0416 04:06:01.007636 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:01.010231 kubelet[2980]: E0416 04:06:01.007923 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.010231 kubelet[2980]: W0416 04:06:01.007936 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.010231 kubelet[2980]: E0416 04:06:01.007949 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:01.033057 kubelet[2980]: E0416 04:06:01.028812 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.033057 kubelet[2980]: W0416 04:06:01.028851 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.033057 kubelet[2980]: E0416 04:06:01.028880 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:01.111033 kubelet[2980]: E0416 04:06:01.067006 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.111033 kubelet[2980]: W0416 04:06:01.067043 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.111033 kubelet[2980]: E0416 04:06:01.067080 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:01.111033 kubelet[2980]: E0416 04:06:01.104230 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.120036 kubelet[2980]: W0416 04:06:01.104332 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.120277 kubelet[2980]: E0416 04:06:01.120116 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:01.231695 kubelet[2980]: E0416 04:06:01.227231 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.404358 kubelet[2980]: W0416 04:06:01.371971 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.404358 kubelet[2980]: E0416 04:06:01.387966 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:01.441219 kubelet[2980]: E0416 04:06:01.441158 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.441219 kubelet[2980]: W0416 04:06:01.441247 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.441219 kubelet[2980]: E0416 04:06:01.441282 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:01.444584 kubelet[2980]: E0416 04:06:01.442274 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.444584 kubelet[2980]: W0416 04:06:01.442291 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.444584 kubelet[2980]: E0416 04:06:01.442309 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:01.444584 kubelet[2980]: E0416 04:06:01.443857 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.444584 kubelet[2980]: W0416 04:06:01.443874 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.444584 kubelet[2980]: E0416 04:06:01.443891 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:01.617798 kubelet[2980]: E0416 04:06:01.503598 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:01.617798 kubelet[2980]: W0416 04:06:01.614297 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:01.617798 kubelet[2980]: E0416 04:06:01.614505 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:01.985558 kubelet[2980]: E0416 04:06:01.977683 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.013545 kubelet[2980]: W0416 04:06:02.008514 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.029373 kubelet[2980]: E0416 04:06:02.027560 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:02.062845 kubelet[2980]: E0416 04:06:02.057063 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.062845 kubelet[2980]: W0416 04:06:02.057213 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.062845 kubelet[2980]: E0416 04:06:02.057417 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:02.133857 kubelet[2980]: E0416 04:06:02.133583 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.263665 kubelet[2980]: W0416 04:06:02.143151 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.263665 kubelet[2980]: E0416 04:06:02.144378 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:02.263665 kubelet[2980]: E0416 04:06:02.252418 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.263665 kubelet[2980]: W0416 04:06:02.252472 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.263665 kubelet[2980]: E0416 04:06:02.252662 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:02.291905 kubelet[2980]: E0416 04:06:02.263898 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.291905 kubelet[2980]: W0416 04:06:02.263954 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.291905 kubelet[2980]: E0416 04:06:02.264072 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:02.291905 kubelet[2980]: E0416 04:06:02.287823 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.291905 kubelet[2980]: W0416 04:06:02.288009 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.291905 kubelet[2980]: E0416 04:06:02.288154 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:02.313450 kubelet[2980]: E0416 04:06:02.312839 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.316323 kubelet[2980]: W0416 04:06:02.312963 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.316323 kubelet[2980]: E0416 04:06:02.315442 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:02.327383 kubelet[2980]: E0416 04:06:02.326347 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.327383 kubelet[2980]: W0416 04:06:02.326651 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.327383 kubelet[2980]: E0416 04:06:02.326861 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:02.402514 kubelet[2980]: E0416 04:06:02.400263 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.402514 kubelet[2980]: W0416 04:06:02.400583 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.402514 kubelet[2980]: E0416 04:06:02.400819 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:02.511884 kubelet[2980]: E0416 04:06:02.508777 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.584874 kubelet[2980]: W0416 04:06:02.534164 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.584874 kubelet[2980]: E0416 04:06:02.534555 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:02.682441 kubelet[2980]: E0416 04:06:02.681682 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.796883 kubelet[2980]: W0416 04:06:02.704695 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.949216 kubelet[2980]: E0416 04:06:02.759967 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:02.965137 kubelet[2980]: E0416 04:06:02.965054 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.986625 kubelet[2980]: W0416 04:06:02.966252 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.986625 kubelet[2980]: E0416 04:06:02.966302 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:02.989859 sshd[5025]: Accepted publickey for core from 10.0.0.1 port 52046 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:06:02.996910 kubelet[2980]: E0416 04:06:02.996322 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.996910 kubelet[2980]: W0416 04:06:02.996462 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.996910 kubelet[2980]: E0416 04:06:02.996644 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:02.996910 kubelet[2980]: E0416 04:06:02.997161 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:02.996910 kubelet[2980]: W0416 04:06:02.997174 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:02.996910 kubelet[2980]: E0416 04:06:02.997189 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.033858 kubelet[2980]: E0416 04:06:03.014220 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.033858 kubelet[2980]: W0416 04:06:03.014326 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.051567 kubelet[2980]: E0416 04:06:03.020066 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.057266 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:06:03.062032 kubelet[2980]: E0416 04:06:03.059711 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.301379 kubelet[2980]: W0416 04:06:03.237307 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.301379 kubelet[2980]: E0416 04:06:03.237799 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.378253 kubelet[2980]: E0416 04:06:03.303207 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.378253 kubelet[2980]: W0416 04:06:03.303324 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.378253 kubelet[2980]: E0416 04:06:03.314427 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.378253 kubelet[2980]: E0416 04:06:03.318974 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.378253 kubelet[2980]: W0416 04:06:03.318996 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.378253 kubelet[2980]: E0416 04:06:03.319022 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.378253 kubelet[2980]: E0416 04:06:03.319220 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.378253 kubelet[2980]: W0416 04:06:03.319228 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.378253 kubelet[2980]: E0416 04:06:03.319237 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.378253 kubelet[2980]: E0416 04:06:03.319367 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.456356 kubelet[2980]: W0416 04:06:03.319374 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.456356 kubelet[2980]: E0416 04:06:03.319382 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.456356 kubelet[2980]: E0416 04:06:03.319541 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.456356 kubelet[2980]: W0416 04:06:03.319635 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.456356 kubelet[2980]: E0416 04:06:03.319648 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.456356 kubelet[2980]: E0416 04:06:03.319766 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.456356 kubelet[2980]: W0416 04:06:03.319773 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.456356 kubelet[2980]: E0416 04:06:03.319787 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.456356 kubelet[2980]: E0416 04:06:03.319909 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.456356 kubelet[2980]: W0416 04:06:03.319917 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.521369 kubelet[2980]: E0416 04:06:03.319933 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.521369 kubelet[2980]: E0416 04:06:03.320348 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.521369 kubelet[2980]: W0416 04:06:03.320358 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.521369 kubelet[2980]: E0416 04:06:03.320367 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.521369 kubelet[2980]: E0416 04:06:03.320532 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.521369 kubelet[2980]: W0416 04:06:03.320542 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.521369 kubelet[2980]: E0416 04:06:03.320551 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.521369 kubelet[2980]: E0416 04:06:03.320650 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.521369 kubelet[2980]: W0416 04:06:03.320656 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.521369 kubelet[2980]: E0416 04:06:03.320663 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.742777 kubelet[2980]: E0416 04:06:03.326130 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.742777 kubelet[2980]: W0416 04:06:03.326237 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.742777 kubelet[2980]: E0416 04:06:03.326342 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.742777 kubelet[2980]: E0416 04:06:03.326705 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.742777 kubelet[2980]: W0416 04:06:03.326714 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.742777 kubelet[2980]: E0416 04:06:03.326726 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.742777 kubelet[2980]: E0416 04:06:03.326888 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.742777 kubelet[2980]: W0416 04:06:03.326977 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.742777 kubelet[2980]: E0416 04:06:03.326987 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.742777 kubelet[2980]: E0416 04:06:03.327125 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.780958 containerd[1575]: time="2026-04-16T04:06:03.718007889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kgtx5,Uid:89b5fbad-4c87-4aac-9951-121c09bbd556,Namespace:calico-system,Attempt:0,}" Apr 16 04:06:03.745631 systemd-logind[1549]: New session 51 of user core. Apr 16 04:06:03.813789 kubelet[2980]: W0416 04:06:03.327131 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.813789 kubelet[2980]: E0416 04:06:03.327140 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.813789 kubelet[2980]: E0416 04:06:03.327241 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.813789 kubelet[2980]: W0416 04:06:03.327248 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.813789 kubelet[2980]: E0416 04:06:03.327256 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.813789 kubelet[2980]: E0416 04:06:03.327351 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.813789 kubelet[2980]: W0416 04:06:03.327357 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.813789 kubelet[2980]: E0416 04:06:03.327365 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.813789 kubelet[2980]: E0416 04:06:03.331069 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.813789 kubelet[2980]: W0416 04:06:03.331132 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.799422 systemd[1]: Started session-51.scope - Session 51 of User core. 
Apr 16 04:06:03.828575 kubelet[2980]: E0416 04:06:03.331155 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.828575 kubelet[2980]: E0416 04:06:03.331491 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.828575 kubelet[2980]: W0416 04:06:03.331504 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.828575 kubelet[2980]: E0416 04:06:03.331520 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.828575 kubelet[2980]: E0416 04:06:03.331666 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.828575 kubelet[2980]: W0416 04:06:03.331673 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.828575 kubelet[2980]: E0416 04:06:03.331682 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.828575 kubelet[2980]: E0416 04:06:03.331862 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.828575 kubelet[2980]: W0416 04:06:03.331870 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.828575 kubelet[2980]: E0416 04:06:03.331881 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.828904 kubelet[2980]: E0416 04:06:03.332138 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.828904 kubelet[2980]: W0416 04:06:03.332148 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.828904 kubelet[2980]: E0416 04:06:03.332159 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.828904 kubelet[2980]: E0416 04:06:03.332282 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.828904 kubelet[2980]: W0416 04:06:03.332289 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.828904 kubelet[2980]: E0416 04:06:03.332299 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.828904 kubelet[2980]: E0416 04:06:03.332416 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.828904 kubelet[2980]: W0416 04:06:03.332422 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.828904 kubelet[2980]: E0416 04:06:03.332435 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.828904 kubelet[2980]: E0416 04:06:03.555174 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.832668 kubelet[2980]: W0416 04:06:03.555208 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.832668 kubelet[2980]: E0416 04:06:03.555319 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.832668 kubelet[2980]: E0416 04:06:03.659720 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.832668 kubelet[2980]: W0416 04:06:03.659980 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.832668 kubelet[2980]: E0416 04:06:03.660135 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.832668 kubelet[2980]: E0416 04:06:03.702803 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.832668 kubelet[2980]: W0416 04:06:03.703341 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.832668 kubelet[2980]: E0416 04:06:03.705566 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.832668 kubelet[2980]: E0416 04:06:03.714459 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.832668 kubelet[2980]: W0416 04:06:03.714588 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.988056 kubelet[2980]: E0416 04:06:03.714631 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.988056 kubelet[2980]: E0416 04:06:03.779590 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.988056 kubelet[2980]: W0416 04:06:03.779686 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.988056 kubelet[2980]: E0416 04:06:03.779741 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:03.988056 kubelet[2980]: E0416 04:06:03.811372 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.988056 kubelet[2980]: W0416 04:06:03.811430 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.988056 kubelet[2980]: E0416 04:06:03.811544 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:03.988056 kubelet[2980]: E0416 04:06:03.812035 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:03.988056 kubelet[2980]: W0416 04:06:03.812051 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:03.988056 kubelet[2980]: E0416 04:06:03.812070 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.027327 kubelet[2980]: E0416 04:06:03.812366 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.027327 kubelet[2980]: W0416 04:06:03.812376 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.027327 kubelet[2980]: E0416 04:06:03.812387 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.027327 kubelet[2980]: E0416 04:06:03.812545 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.027327 kubelet[2980]: W0416 04:06:03.812627 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.027327 kubelet[2980]: E0416 04:06:03.812639 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.027327 kubelet[2980]: E0416 04:06:03.812853 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.027327 kubelet[2980]: W0416 04:06:03.812861 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.027327 kubelet[2980]: E0416 04:06:03.812870 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.027327 kubelet[2980]: E0416 04:06:03.817428 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.222607 kubelet[2980]: W0416 04:06:03.817448 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.222607 kubelet[2980]: E0416 04:06:03.817617 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.222607 kubelet[2980]: E0416 04:06:03.851880 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.222607 kubelet[2980]: W0416 04:06:03.852071 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.222607 kubelet[2980]: E0416 04:06:03.875486 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.222607 kubelet[2980]: E0416 04:06:03.877483 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.222607 kubelet[2980]: W0416 04:06:03.877508 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.222607 kubelet[2980]: E0416 04:06:03.877613 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.222607 kubelet[2980]: E0416 04:06:03.948015 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.222607 kubelet[2980]: W0416 04:06:03.964626 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.222974 kubelet[2980]: E0416 04:06:03.975209 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.222974 kubelet[2980]: E0416 04:06:03.975059 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:06:04.222974 kubelet[2980]: E0416 04:06:04.011251 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.222974 kubelet[2980]: W0416 04:06:04.011349 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.222974 kubelet[2980]: E0416 04:06:04.032031 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.222974 kubelet[2980]: E0416 04:06:04.169828 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.222974 kubelet[2980]: W0416 04:06:04.170232 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.222974 kubelet[2980]: E0416 04:06:04.171309 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.222974 kubelet[2980]: E0416 04:06:04.195511 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.222974 kubelet[2980]: W0416 04:06:04.195536 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.275473 kubelet[2980]: E0416 04:06:04.195564 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.275473 kubelet[2980]: E0416 04:06:04.210550 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.275473 kubelet[2980]: W0416 04:06:04.210659 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.275473 kubelet[2980]: E0416 04:06:04.210710 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.275473 kubelet[2980]: E0416 04:06:04.216642 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.275473 kubelet[2980]: W0416 04:06:04.216819 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.275473 kubelet[2980]: E0416 04:06:04.216934 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.275473 kubelet[2980]: E0416 04:06:04.226602 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.275473 kubelet[2980]: W0416 04:06:04.226631 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.275473 kubelet[2980]: E0416 04:06:04.226659 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.279726 kubelet[2980]: E0416 04:06:04.226889 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.279726 kubelet[2980]: W0416 04:06:04.226898 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.279726 kubelet[2980]: E0416 04:06:04.226909 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.279726 kubelet[2980]: E0416 04:06:04.227047 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.279726 kubelet[2980]: W0416 04:06:04.227054 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.279726 kubelet[2980]: E0416 04:06:04.227064 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.279726 kubelet[2980]: E0416 04:06:04.227252 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.279726 kubelet[2980]: W0416 04:06:04.227262 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.279726 kubelet[2980]: E0416 04:06:04.227273 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.279726 kubelet[2980]: E0416 04:06:04.227453 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.280071 kubelet[2980]: W0416 04:06:04.227463 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.280071 kubelet[2980]: E0416 04:06:04.227475 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.280071 kubelet[2980]: E0416 04:06:04.227615 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.280071 kubelet[2980]: W0416 04:06:04.227623 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.280071 kubelet[2980]: E0416 04:06:04.227640 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.280071 kubelet[2980]: E0416 04:06:04.227784 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.280071 kubelet[2980]: W0416 04:06:04.227791 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.280071 kubelet[2980]: E0416 04:06:04.227800 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.280071 kubelet[2980]: E0416 04:06:04.227923 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.280071 kubelet[2980]: W0416 04:06:04.227931 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.295708 kubelet[2980]: E0416 04:06:04.227940 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.295708 kubelet[2980]: E0416 04:06:04.228062 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.295708 kubelet[2980]: W0416 04:06:04.228069 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.295708 kubelet[2980]: E0416 04:06:04.228080 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.412502 kubelet[2980]: E0416 04:06:04.410967 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.412502 kubelet[2980]: W0416 04:06:04.411355 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.412502 kubelet[2980]: E0416 04:06:04.412288 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.646901 kubelet[2980]: E0416 04:06:04.642350 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.646901 kubelet[2980]: W0416 04:06:04.642426 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.646901 kubelet[2980]: E0416 04:06:04.642551 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.757429 kubelet[2980]: E0416 04:06:04.747163 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.770583 kubelet[2980]: W0416 04:06:04.747924 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.770583 kubelet[2980]: E0416 04:06:04.759195 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:04.770583 kubelet[2980]: E0416 04:06:04.768250 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.770583 kubelet[2980]: W0416 04:06:04.768431 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.770583 kubelet[2980]: E0416 04:06:04.768478 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:04.865326 kubelet[2980]: E0416 04:06:04.860250 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:04.865326 kubelet[2980]: W0416 04:06:04.860298 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:04.865326 kubelet[2980]: E0416 04:06:04.860467 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:05.089665 kubelet[2980]: E0416 04:06:05.080943 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:05.089665 kubelet[2980]: W0416 04:06:05.089563 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:05.134361 kubelet[2980]: E0416 04:06:05.089820 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:05.134361 kubelet[2980]: E0416 04:06:05.133021 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:05.134361 kubelet[2980]: W0416 04:06:05.133747 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:05.134361 kubelet[2980]: E0416 04:06:05.133792 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:05.178526 kubelet[2980]: E0416 04:06:05.166218 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:05.178593 containerd[1575]: time="2026-04-16T04:06:05.166795532Z" level=info msg="connecting to shim d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0" address="unix:///run/containerd/s/aeabc3715f557963c617c8591f62e432aca8901fc2a59ed43a1f9f47d5f9452d" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:06:05.178967 kubelet[2980]: W0416 04:06:05.178675 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:05.178967 kubelet[2980]: E0416 04:06:05.178738 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:05.196054 kubelet[2980]: E0416 04:06:05.195399 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:05.196054 kubelet[2980]: W0416 04:06:05.195693 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:05.196054 kubelet[2980]: E0416 04:06:05.195830 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:05.261221 kubelet[2980]: E0416 04:06:05.260694 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:05.261221 kubelet[2980]: W0416 04:06:05.260856 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:05.261221 kubelet[2980]: E0416 04:06:05.260983 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:05.377519 kubelet[2980]: E0416 04:06:05.357961 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:05.377519 kubelet[2980]: W0416 04:06:05.358250 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:05.377519 kubelet[2980]: E0416 04:06:05.358486 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:05.389749 kubelet[2980]: E0416 04:06:05.388878 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:05.389749 kubelet[2980]: W0416 04:06:05.388933 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:05.389749 kubelet[2980]: E0416 04:06:05.388999 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:05.544226 kubelet[2980]: E0416 04:06:05.400478 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:05.544226 kubelet[2980]: W0416 04:06:05.542855 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:05.544226 kubelet[2980]: E0416 04:06:05.543007 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:05.560792 kubelet[2980]: E0416 04:06:05.556298 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:05.560792 kubelet[2980]: W0416 04:06:05.556335 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:05.560792 kubelet[2980]: E0416 04:06:05.556384 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:05.983394 kubelet[2980]: E0416 04:06:05.982227 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.001823 kubelet[2980]: W0416 04:06:05.982286 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.002458 kubelet[2980]: E0416 04:06:06.001874 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.003216 kubelet[2980]: E0416 04:06:05.975430 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:06.004402 kubelet[2980]: E0416 04:06:06.004276 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.004402 kubelet[2980]: W0416 04:06:06.004314 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.004402 kubelet[2980]: E0416 04:06:06.004339 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.094277 kubelet[2980]: E0416 04:06:06.092994 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.135882 kubelet[2980]: W0416 04:06:06.093708 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.135882 kubelet[2980]: E0416 04:06:06.106722 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.135882 kubelet[2980]: E0416 04:06:06.110634 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.135882 kubelet[2980]: W0416 04:06:06.110671 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.135882 kubelet[2980]: E0416 04:06:06.110857 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.186521 kubelet[2980]: E0416 04:06:06.185813 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.415471 kubelet[2980]: W0416 04:06:06.198038 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.415471 kubelet[2980]: E0416 04:06:06.267491 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.415471 kubelet[2980]: E0416 04:06:06.336906 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.415471 kubelet[2980]: W0416 04:06:06.391357 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.415471 kubelet[2980]: E0416 04:06:06.391804 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.430477 kubelet[2980]: E0416 04:06:06.412209 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.430477 kubelet[2980]: W0416 04:06:06.427642 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.430477 kubelet[2980]: E0416 04:06:06.428005 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.449294 kubelet[2980]: E0416 04:06:06.438328 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.449294 kubelet[2980]: W0416 04:06:06.438358 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.449294 kubelet[2980]: E0416 04:06:06.438384 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.449294 kubelet[2980]: E0416 04:06:06.443805 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.449294 kubelet[2980]: W0416 04:06:06.443907 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.449294 kubelet[2980]: E0416 04:06:06.444002 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.449294 kubelet[2980]: E0416 04:06:06.444352 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.449294 kubelet[2980]: W0416 04:06:06.444363 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.449294 kubelet[2980]: E0416 04:06:06.444375 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.520248 kubelet[2980]: E0416 04:06:06.519918 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.520248 kubelet[2980]: W0416 04:06:06.520171 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.520248 kubelet[2980]: E0416 04:06:06.520244 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.528400 kubelet[2980]: E0416 04:06:06.520810 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.528400 kubelet[2980]: W0416 04:06:06.520824 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.528400 kubelet[2980]: E0416 04:06:06.520841 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.528400 kubelet[2980]: E0416 04:06:06.528354 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.528400 kubelet[2980]: W0416 04:06:06.528378 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.537171 kubelet[2980]: E0416 04:06:06.535812 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.564215 kubelet[2980]: I0416 04:06:06.549791 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6bb8af70-d3bd-4282-a3de-bea0ffd9b767-kubelet-dir\") pod \"csi-node-driver-r69h8\" (UID: \"6bb8af70-d3bd-4282-a3de-bea0ffd9b767\") " pod="calico-system/csi-node-driver-r69h8" Apr 16 04:06:06.697924 kubelet[2980]: E0416 04:06:06.642399 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.697924 kubelet[2980]: W0416 04:06:06.692623 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.697924 kubelet[2980]: E0416 04:06:06.692656 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.697924 kubelet[2980]: I0416 04:06:06.692717 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6bb8af70-d3bd-4282-a3de-bea0ffd9b767-registration-dir\") pod \"csi-node-driver-r69h8\" (UID: \"6bb8af70-d3bd-4282-a3de-bea0ffd9b767\") " pod="calico-system/csi-node-driver-r69h8" Apr 16 04:06:06.873133 kubelet[2980]: E0416 04:06:06.871500 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.873133 kubelet[2980]: W0416 04:06:06.871544 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.873133 kubelet[2980]: E0416 04:06:06.871603 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.897461 kubelet[2980]: E0416 04:06:06.875358 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.897461 kubelet[2980]: W0416 04:06:06.875383 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.897461 kubelet[2980]: E0416 04:06:06.875407 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.939236 systemd[1]: Started cri-containerd-d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0.scope - libcontainer container d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0. Apr 16 04:06:06.984265 kubelet[2980]: E0416 04:06:06.940328 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.984265 kubelet[2980]: W0416 04:06:06.940374 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.984265 kubelet[2980]: E0416 04:06:06.942079 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.984265 kubelet[2980]: E0416 04:06:06.976714 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.984265 kubelet[2980]: W0416 04:06:06.976809 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.984265 kubelet[2980]: E0416 04:06:06.976932 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.984265 kubelet[2980]: E0416 04:06:06.979967 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.984265 kubelet[2980]: W0416 04:06:06.979987 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.984265 kubelet[2980]: E0416 04:06:06.980011 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.984265 kubelet[2980]: E0416 04:06:06.980342 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.985132 kubelet[2980]: W0416 04:06:06.980357 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.985132 kubelet[2980]: E0416 04:06:06.980370 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.985132 kubelet[2980]: E0416 04:06:06.982076 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.985132 kubelet[2980]: W0416 04:06:06.982116 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.985132 kubelet[2980]: E0416 04:06:06.982130 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:06.985132 kubelet[2980]: E0416 04:06:06.983343 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.985132 kubelet[2980]: W0416 04:06:06.983356 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:06.985132 kubelet[2980]: E0416 04:06:06.983371 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:06.985132 kubelet[2980]: E0416 04:06:06.984674 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:06.985132 kubelet[2980]: W0416 04:06:06.984689 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.022671 kubelet[2980]: E0416 04:06:06.984718 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.022671 kubelet[2980]: E0416 04:06:06.984956 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.022671 kubelet[2980]: W0416 04:06:06.984966 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.022671 kubelet[2980]: E0416 04:06:06.984977 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.022671 kubelet[2980]: E0416 04:06:06.985168 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.022671 kubelet[2980]: W0416 04:06:06.985177 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.022671 kubelet[2980]: E0416 04:06:06.985188 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.022671 kubelet[2980]: E0416 04:06:06.985310 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.022671 kubelet[2980]: W0416 04:06:06.985317 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.022671 kubelet[2980]: E0416 04:06:06.985325 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.090154 kubelet[2980]: E0416 04:06:06.985498 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.090154 kubelet[2980]: W0416 04:06:06.985507 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.090154 kubelet[2980]: E0416 04:06:06.985521 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.090154 kubelet[2980]: E0416 04:06:07.056987 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.090154 kubelet[2980]: W0416 04:06:07.057145 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.094570 kubelet[2980]: E0416 04:06:07.057335 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.102349 kubelet[2980]: E0416 04:06:07.101931 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.102349 kubelet[2980]: W0416 04:06:07.101989 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.102349 kubelet[2980]: E0416 04:06:07.102192 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.139937 kubelet[2980]: E0416 04:06:07.138977 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.139937 kubelet[2980]: W0416 04:06:07.139116 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.139937 kubelet[2980]: E0416 04:06:07.139235 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.142500 kubelet[2980]: E0416 04:06:07.142469 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.142629 kubelet[2980]: W0416 04:06:07.142612 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.142698 kubelet[2980]: E0416 04:06:07.142687 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.142806 kubelet[2980]: I0416 04:06:07.142758 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6bb8af70-d3bd-4282-a3de-bea0ffd9b767-socket-dir\") pod \"csi-node-driver-r69h8\" (UID: \"6bb8af70-d3bd-4282-a3de-bea0ffd9b767\") " pod="calico-system/csi-node-driver-r69h8" Apr 16 04:06:07.143261 kubelet[2980]: E0416 04:06:07.143239 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.143344 kubelet[2980]: W0416 04:06:07.143332 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.143396 kubelet[2980]: E0416 04:06:07.143389 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.143586 kubelet[2980]: I0416 04:06:07.143568 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhkwp\" (UniqueName: \"kubernetes.io/projected/6bb8af70-d3bd-4282-a3de-bea0ffd9b767-kube-api-access-nhkwp\") pod \"csi-node-driver-r69h8\" (UID: \"6bb8af70-d3bd-4282-a3de-bea0ffd9b767\") " pod="calico-system/csi-node-driver-r69h8" Apr 16 04:06:07.143805 kubelet[2980]: E0416 04:06:07.143790 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.143869 kubelet[2980]: W0416 04:06:07.143859 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.143936 kubelet[2980]: E0416 04:06:07.143925 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.144160 kubelet[2980]: E0416 04:06:07.144147 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.144235 kubelet[2980]: W0416 04:06:07.144225 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.144287 kubelet[2980]: E0416 04:06:07.144279 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.144499 kubelet[2980]: E0416 04:06:07.144488 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.144578 kubelet[2980]: W0416 04:06:07.144569 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.144632 kubelet[2980]: E0416 04:06:07.144623 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.144692 kubelet[2980]: I0416 04:06:07.144677 2980 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6bb8af70-d3bd-4282-a3de-bea0ffd9b767-varrun\") pod \"csi-node-driver-r69h8\" (UID: \"6bb8af70-d3bd-4282-a3de-bea0ffd9b767\") " pod="calico-system/csi-node-driver-r69h8" Apr 16 04:06:07.144931 kubelet[2980]: E0416 04:06:07.144918 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.144994 kubelet[2980]: W0416 04:06:07.144986 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.145039 kubelet[2980]: E0416 04:06:07.145031 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.169651 kubelet[2980]: E0416 04:06:07.169208 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.169651 kubelet[2980]: W0416 04:06:07.169299 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.189190 kubelet[2980]: E0416 04:06:07.169747 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.189190 kubelet[2980]: E0416 04:06:07.189012 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.189190 kubelet[2980]: W0416 04:06:07.189182 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.189949 kubelet[2980]: E0416 04:06:07.189473 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.192354 kubelet[2980]: E0416 04:06:07.190277 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.192354 kubelet[2980]: W0416 04:06:07.190330 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.192354 kubelet[2980]: E0416 04:06:07.190347 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.223728 kubelet[2980]: E0416 04:06:07.223314 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.223728 kubelet[2980]: W0416 04:06:07.223455 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.223728 kubelet[2980]: E0416 04:06:07.223718 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.366017 kubelet[2980]: E0416 04:06:07.295773 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.366017 kubelet[2980]: W0416 04:06:07.295813 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.366017 kubelet[2980]: E0416 04:06:07.295858 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.366017 kubelet[2980]: E0416 04:06:07.321363 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.366017 kubelet[2980]: W0416 04:06:07.323754 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.366017 kubelet[2980]: E0416 04:06:07.324044 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.366017 kubelet[2980]: E0416 04:06:07.339353 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.366017 kubelet[2980]: W0416 04:06:07.339381 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.366017 kubelet[2980]: E0416 04:06:07.341657 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.366017 kubelet[2980]: E0416 04:06:07.347233 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.597377 kubelet[2980]: W0416 04:06:07.347258 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.597377 kubelet[2980]: E0416 04:06:07.347286 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.597377 kubelet[2980]: E0416 04:06:07.382057 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.597377 kubelet[2980]: W0416 04:06:07.382157 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.597377 kubelet[2980]: E0416 04:06:07.382323 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.597377 kubelet[2980]: E0416 04:06:07.383062 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.597377 kubelet[2980]: W0416 04:06:07.383128 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.597377 kubelet[2980]: E0416 04:06:07.383150 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:07.597377 kubelet[2980]: E0416 04:06:07.383491 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.597377 kubelet[2980]: W0416 04:06:07.383504 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.477551 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Apr 16 04:06:07.619662 sshd[5121]: Connection closed by 10.0.0.1 port 52046 Apr 16 04:06:07.627911 kubelet[2980]: E0416 04:06:07.383520 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:07.627911 kubelet[2980]: E0416 04:06:07.384067 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:07.627911 kubelet[2980]: W0416 04:06:07.384079 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:07.627911 kubelet[2980]: E0416 04:06:07.387307 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 16 04:06:07.627911 kubelet[2980]: E0416 04:06:07.458926 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.627911 kubelet[2980]: W0416 04:06:07.458954 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.627911 kubelet[2980]: E0416 04:06:07.459178 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.627911 kubelet[2980]: E0416 04:06:07.563043 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.627911 kubelet[2980]: W0416 04:06:07.563074 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.627911 kubelet[2980]: E0416 04:06:07.563154 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.629656 kubelet[2980]: E0416 04:06:07.597809 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:07.629656 kubelet[2980]: E0416 04:06:07.628662 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.629656 kubelet[2980]: W0416 04:06:07.628699 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.629656 kubelet[2980]: E0416 04:06:07.628798 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.636130 kubelet[2980]: E0416 04:06:07.635601 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.636130 kubelet[2980]: W0416 04:06:07.635634 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.636130 kubelet[2980]: E0416 04:06:07.635741 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.636509 kubelet[2980]: E0416 04:06:07.636488 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.636962 kubelet[2980]: W0416 04:06:07.636942 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.637058 kubelet[2980]: E0416 04:06:07.637046 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.644728 kubelet[2980]: E0416 04:06:07.644119 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.648799 kubelet[2980]: W0416 04:06:07.648700 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.649022 kubelet[2980]: E0416 04:06:07.648930 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.649559 kubelet[2980]: E0416 04:06:07.649542 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.650078 kubelet[2980]: W0416 04:06:07.650054 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.650226 kubelet[2980]: E0416 04:06:07.650211 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.651390 kubelet[2980]: E0416 04:06:07.651374 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.651501 kubelet[2980]: W0416 04:06:07.651487 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.651560 kubelet[2980]: E0416 04:06:07.651550 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.651919 kubelet[2980]: E0416 04:06:07.651906 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.651999 kubelet[2980]: W0416 04:06:07.651989 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.652051 kubelet[2980]: E0416 04:06:07.652042 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.691833 kubelet[2980]: E0416 04:06:07.688597 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.718787 systemd[1]: sshd@50-10.0.0.115:22-10.0.0.1:52046.service: Deactivated successfully.
Apr 16 04:06:07.723403 kubelet[2980]: W0416 04:06:07.692316 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.724001 kubelet[2980]: E0416 04:06:07.723816 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.724618 kubelet[2980]: E0416 04:06:07.724597 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.724705 kubelet[2980]: W0416 04:06:07.724690 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.724887 kubelet[2980]: E0416 04:06:07.724872 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.725152 kubelet[2980]: E0416 04:06:07.725139 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.725214 kubelet[2980]: W0416 04:06:07.725205 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.725258 kubelet[2980]: E0416 04:06:07.725249 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.783236 kubelet[2980]: E0416 04:06:07.783118 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.783839 kubelet[2980]: W0416 04:06:07.783809 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.784229 kubelet[2980]: E0416 04:06:07.784140 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.784856 systemd[1]: session-51.scope: Deactivated successfully.
Apr 16 04:06:07.790852 systemd-logind[1549]: Session 51 logged out. Waiting for processes to exit.
Apr 16 04:06:07.798160 kubelet[2980]: E0416 04:06:07.791731 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.798160 kubelet[2980]: W0416 04:06:07.791766 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.798160 kubelet[2980]: E0416 04:06:07.791792 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.858973 kubelet[2980]: E0416 04:06:07.858447 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.858973 kubelet[2980]: W0416 04:06:07.858955 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.858973 kubelet[2980]: E0416 04:06:07.859051 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:07.887751 kubelet[2980]: E0416 04:06:07.887293 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:07.887751 kubelet[2980]: W0416 04:06:07.887327 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:07.887751 kubelet[2980]: E0416 04:06:07.887364 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:08.000795 systemd-logind[1549]: Removed session 51.
Apr 16 04:06:08.783699 kubelet[2980]: E0416 04:06:08.782333 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:08.809371 kubelet[2980]: W0416 04:06:08.782416 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:08.809371 kubelet[2980]: E0416 04:06:08.808157 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:09.119175 containerd[1575]: time="2026-04-16T04:06:09.109234307Z" level=error msg="get state for d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0" error="context deadline exceeded"
Apr 16 04:06:09.119175 containerd[1575]: time="2026-04-16T04:06:09.113800113Z" level=warning msg="unknown status" status=0
Apr 16 04:06:09.334596 kubelet[2980]: E0416 04:06:09.157449 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:06:09.583304 containerd[1575]: time="2026-04-16T04:06:09.573440534Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 16 04:06:09.596848 kubelet[2980]: E0416 04:06:09.596653 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:09.949616 containerd[1575]: time="2026-04-16T04:06:09.946905022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kgtx5,Uid:89b5fbad-4c87-4aac-9951-121c09bbd556,Namespace:calico-system,Attempt:0,} returns sandbox id \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\""
Apr 16 04:06:10.126540 containerd[1575]: time="2026-04-16T04:06:10.124737528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 16 04:06:11.617588 kubelet[2980]: E0416 04:06:11.608028 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:12.807886 systemd[1]: Started sshd@51-10.0.0.115:22-10.0.0.1:34282.service - OpenSSH per-connection server daemon (10.0.0.1:34282).
Apr 16 04:06:13.588381 kubelet[2980]: E0416 04:06:13.588255 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:14.301013 kubelet[2980]: E0416 04:06:14.295762 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:06:14.901617 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 34282 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:06:15.005826 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:06:15.258240 systemd-logind[1549]: New session 52 of user core.
Apr 16 04:06:15.458250 systemd[1]: Started session-52.scope - Session 52 of User core.
Apr 16 04:06:15.756678 kubelet[2980]: E0416 04:06:15.722954 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:16.008793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1284602665.mount: Deactivated successfully.
Apr 16 04:06:17.615176 kubelet[2980]: E0416 04:06:17.600548 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:17.759873 sshd[5314]: Connection closed by 10.0.0.1 port 34282
Apr 16 04:06:17.841585 sshd-session[5306]: pam_unix(sshd:session): session closed for user core
Apr 16 04:06:18.063700 systemd[1]: sshd@51-10.0.0.115:22-10.0.0.1:34282.service: Deactivated successfully.
Apr 16 04:06:18.112012 systemd-logind[1549]: Session 52 logged out. Waiting for processes to exit.
Apr 16 04:06:18.177495 systemd[1]: session-52.scope: Deactivated successfully.
Apr 16 04:06:18.204208 systemd-logind[1549]: Removed session 52.
Apr 16 04:06:19.350921 containerd[1575]: time="2026-04-16T04:06:19.341040135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:06:19.482027 containerd[1575]: time="2026-04-16T04:06:19.377226212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433"
Apr 16 04:06:19.483760 containerd[1575]: time="2026-04-16T04:06:19.483559213Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:06:19.484596 kubelet[2980]: E0416 04:06:19.484536 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:06:19.528562 containerd[1575]: time="2026-04-16T04:06:19.519250365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:06:19.528562 containerd[1575]: time="2026-04-16T04:06:19.520348967Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 9.395266455s"
Apr 16 04:06:19.528562 containerd[1575]: time="2026-04-16T04:06:19.520388291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 16 04:06:19.573722 kubelet[2980]: E0416 04:06:19.572843 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:19.850747 containerd[1575]: time="2026-04-16T04:06:19.842759422Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 16 04:06:20.097187 containerd[1575]: time="2026-04-16T04:06:20.089394727Z" level=info msg="Container a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:06:20.280254 containerd[1575]: time="2026-04-16T04:06:20.263079565Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be\""
Apr 16 04:06:20.502239 containerd[1575]: time="2026-04-16T04:06:20.497636678Z" level=info msg="StartContainer for \"a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be\""
Apr 16 04:06:20.654529 containerd[1575]: time="2026-04-16T04:06:20.641870301Z" level=info msg="connecting to shim a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be" address="unix:///run/containerd/s/aeabc3715f557963c617c8591f62e432aca8901fc2a59ed43a1f9f47d5f9452d" protocol=ttrpc version=3
Apr 16 04:06:21.576545 systemd[1]: Started cri-containerd-a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be.scope - libcontainer container a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be.
Apr 16 04:06:21.695786 kubelet[2980]: E0416 04:06:21.590970 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:23.351890 systemd[1]: Started sshd@52-10.0.0.115:22-10.0.0.1:32880.service - OpenSSH per-connection server daemon (10.0.0.1:32880).
Apr 16 04:06:23.549759 containerd[1575]: time="2026-04-16T04:06:23.549236995Z" level=error msg="get state for a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be" error="context deadline exceeded"
Apr 16 04:06:23.549759 containerd[1575]: time="2026-04-16T04:06:23.549516824Z" level=warning msg="unknown status" status=0
Apr 16 04:06:23.587567 kubelet[2980]: E0416 04:06:23.587071 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:24.562305 kubelet[2980]: E0416 04:06:24.538748 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:06:24.785050 sshd[5361]: Accepted publickey for core from 10.0.0.1 port 32880 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:06:24.847325 sshd-session[5361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:06:25.234678 systemd-logind[1549]: New session 53 of user core.
Apr 16 04:06:25.372156 systemd[1]: Started session-53.scope - Session 53 of User core.
Apr 16 04:06:25.650053 kubelet[2980]: E0416 04:06:25.592995 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:25.841203 containerd[1575]: time="2026-04-16T04:06:25.817422997Z" level=error msg="get state for a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be" error="context deadline exceeded"
Apr 16 04:06:25.841203 containerd[1575]: time="2026-04-16T04:06:25.818033350Z" level=warning msg="unknown status" status=0
Apr 16 04:06:26.450901 containerd[1575]: time="2026-04-16T04:06:26.422985878Z" level=error msg="ttrpc: received message on inactive stream" stream=5
Apr 16 04:06:26.867346 containerd[1575]: time="2026-04-16T04:06:26.587009943Z" level=error msg="ttrpc: received message on inactive stream" stream=7
Apr 16 04:06:27.581803 kubelet[2980]: E0416 04:06:27.573945 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:28.060017 containerd[1575]: time="2026-04-16T04:06:28.059554344Z" level=info msg="StartContainer for \"a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be\" returns successfully"
Apr 16 04:06:28.562685 kubelet[2980]: E0416 04:06:28.561067 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.562685 kubelet[2980]: W0416 04:06:28.561864 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.562685 kubelet[2980]: E0416 04:06:28.561983 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.647195 kubelet[2980]: E0416 04:06:28.595015 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.647195 kubelet[2980]: W0416 04:06:28.597620 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.647195 kubelet[2980]: E0416 04:06:28.597685 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.647195 kubelet[2980]: E0416 04:06:28.612912 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.647195 kubelet[2980]: W0416 04:06:28.617895 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.647195 kubelet[2980]: E0416 04:06:28.618141 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.647195 kubelet[2980]: E0416 04:06:28.619275 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.647195 kubelet[2980]: W0416 04:06:28.619291 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.647195 kubelet[2980]: E0416 04:06:28.619309 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.647195 kubelet[2980]: E0416 04:06:28.619572 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.648125 kubelet[2980]: W0416 04:06:28.619581 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.648125 kubelet[2980]: E0416 04:06:28.619591 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.648125 kubelet[2980]: E0416 04:06:28.619703 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.648125 kubelet[2980]: W0416 04:06:28.619709 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.648125 kubelet[2980]: E0416 04:06:28.619716 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.648125 kubelet[2980]: E0416 04:06:28.619830 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.648125 kubelet[2980]: W0416 04:06:28.619837 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.648125 kubelet[2980]: E0416 04:06:28.619844 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.648125 kubelet[2980]: E0416 04:06:28.619955 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.648125 kubelet[2980]: W0416 04:06:28.619961 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.869497 kubelet[2980]: E0416 04:06:28.619968 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.869497 kubelet[2980]: E0416 04:06:28.620197 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.869497 kubelet[2980]: W0416 04:06:28.620206 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.869497 kubelet[2980]: E0416 04:06:28.620215 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.869497 kubelet[2980]: E0416 04:06:28.620409 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.869497 kubelet[2980]: W0416 04:06:28.620416 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.869497 kubelet[2980]: E0416 04:06:28.620423 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.869497 kubelet[2980]: E0416 04:06:28.644137 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.869497 kubelet[2980]: W0416 04:06:28.644256 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.869497 kubelet[2980]: E0416 04:06:28.644443 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.917402 kubelet[2980]: E0416 04:06:28.672869 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.917402 kubelet[2980]: W0416 04:06:28.672991 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.917402 kubelet[2980]: E0416 04:06:28.673205 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.922737 kubelet[2980]: E0416 04:06:28.922542 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.922737 kubelet[2980]: W0416 04:06:28.922657 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.922737 kubelet[2980]: E0416 04:06:28.922692 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.923813 kubelet[2980]: E0416 04:06:28.922952 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.923813 kubelet[2980]: W0416 04:06:28.922999 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.923813 kubelet[2980]: E0416 04:06:28.923018 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.932807 kubelet[2980]: E0416 04:06:28.925015 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.932807 kubelet[2980]: W0416 04:06:28.925034 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.932807 kubelet[2980]: E0416 04:06:28.925050 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:28.932807 kubelet[2980]: E0416 04:06:28.928342 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:28.932807 kubelet[2980]: W0416 04:06:28.928360 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:28.932807 kubelet[2980]: E0416 04:06:28.928376 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:29.071479 kubelet[2980]: E0416 04:06:29.057068 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:29.071479 kubelet[2980]: W0416 04:06:29.057220 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:29.071479 kubelet[2980]: E0416 04:06:29.057340 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:29.283806 kubelet[2980]: E0416 04:06:29.261565 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:29.283806 kubelet[2980]: W0416 04:06:29.261823 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:29.283806 kubelet[2980]: E0416 04:06:29.262066 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:29.359264 kubelet[2980]: E0416 04:06:29.341812 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:29.359264 kubelet[2980]: W0416 04:06:29.342069 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:29.359264 kubelet[2980]: E0416 04:06:29.342827 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:29.693287 kubelet[2980]: E0416 04:06:29.579790 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:06:29.693287 kubelet[2980]: E0416 04:06:29.650306 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:29.693287 kubelet[2980]: W0416 04:06:29.650372 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:29.693287 kubelet[2980]: E0416 04:06:29.688131 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 16 04:06:29.843655 kubelet[2980]: E0416 04:06:29.842376 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:06:29.843655 kubelet[2980]: E0416 04:06:29.842804 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 16 04:06:29.843655 kubelet[2980]: W0416 04:06:29.842818 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 16 04:06:29.843655 kubelet[2980]: E0416 04:06:29.842902 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 16 04:06:29.977123 kubelet[2980]: E0416 04:06:29.894165 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:29.977123 kubelet[2980]: W0416 04:06:29.943236 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:29.977123 kubelet[2980]: E0416 04:06:29.943285 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:30.047695 kubelet[2980]: E0416 04:06:30.045435 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:30.047695 kubelet[2980]: W0416 04:06:30.046017 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:30.106741 kubelet[2980]: E0416 04:06:30.046065 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:30.150137 kubelet[2980]: E0416 04:06:30.144834 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:30.150137 kubelet[2980]: W0416 04:06:30.144882 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:30.150137 kubelet[2980]: E0416 04:06:30.144915 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:30.207938 kubelet[2980]: E0416 04:06:30.207250 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:30.207938 kubelet[2980]: W0416 04:06:30.207385 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:30.233483 kubelet[2980]: E0416 04:06:30.207508 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:30.254071 kubelet[2980]: E0416 04:06:30.249240 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:30.254071 kubelet[2980]: W0416 04:06:30.249454 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:30.404039 kubelet[2980]: E0416 04:06:30.253370 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 16 04:06:30.477049 systemd[1]: cri-containerd-a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be.scope: Deactivated successfully. Apr 16 04:06:30.478814 kubelet[2980]: E0416 04:06:30.311286 2980 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 16 04:06:30.478814 kubelet[2980]: W0416 04:06:30.477504 2980 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 16 04:06:30.478814 kubelet[2980]: E0416 04:06:30.477636 2980 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 16 04:06:30.697253 containerd[1575]: time="2026-04-16T04:06:30.697002061Z" level=info msg="received container exit event container_id:\"a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be\" id:\"a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be\" pid:5351 exited_at:{seconds:1776312390 nanos:593502889}" Apr 16 04:06:31.503800 sshd[5379]: Connection closed by 10.0.0.1 port 32880 Apr 16 04:06:31.576968 sshd-session[5361]: pam_unix(sshd:session): session closed for user core Apr 16 04:06:31.661743 kubelet[2980]: E0416 04:06:31.661606 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:31.871006 systemd[1]: sshd@52-10.0.0.115:22-10.0.0.1:32880.service: Deactivated successfully. Apr 16 04:06:32.161420 systemd[1]: session-53.scope: Deactivated successfully. Apr 16 04:06:32.189405 systemd[1]: session-53.scope: Consumed 1.123s CPU time, 15.7M memory peak. Apr 16 04:06:32.478860 systemd-logind[1549]: Session 53 logged out. Waiting for processes to exit. Apr 16 04:06:32.700475 systemd-logind[1549]: Removed session 53. Apr 16 04:06:33.237503 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be-rootfs.mount: Deactivated successfully. 
Apr 16 04:06:33.595676 kubelet[2980]: E0416 04:06:33.577583 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:34.695452 containerd[1575]: time="2026-04-16T04:06:34.693967452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 16 04:06:34.840949 kubelet[2980]: E0416 04:06:34.832768 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:34.847664 kubelet[2980]: E0416 04:06:34.846003 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:06:36.689717 kubelet[2980]: E0416 04:06:36.687614 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:36.892857 systemd[1]: Started sshd@53-10.0.0.115:22-10.0.0.1:38688.service - OpenSSH per-connection server daemon (10.0.0.1:38688). 
Apr 16 04:06:38.129871 sshd[5475]: Accepted publickey for core from 10.0.0.1 port 38688 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:06:38.553993 sshd-session[5475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:06:39.018277 kubelet[2980]: E0416 04:06:39.003669 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:39.094819 systemd-logind[1549]: New session 54 of user core. Apr 16 04:06:39.118389 systemd[1]: Started session-54.scope - Session 54 of User core. Apr 16 04:06:39.896891 kubelet[2980]: E0416 04:06:39.887661 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:06:40.571021 kubelet[2980]: E0416 04:06:40.570689 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:40.624988 kubelet[2980]: E0416 04:06:40.623213 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:06:41.339148 sshd[5478]: Connection closed by 10.0.0.1 port 38688 Apr 16 04:06:41.347621 sshd-session[5475]: pam_unix(sshd:session): session closed for user core Apr 16 04:06:41.766114 systemd[1]: sshd@53-10.0.0.115:22-10.0.0.1:38688.service: Deactivated successfully. 
Apr 16 04:06:41.815543 systemd[1]: session-54.scope: Deactivated successfully. Apr 16 04:06:42.002921 systemd-logind[1549]: Session 54 logged out. Waiting for processes to exit. Apr 16 04:06:42.041016 systemd-logind[1549]: Removed session 54. Apr 16 04:06:42.606694 kubelet[2980]: E0416 04:06:42.601155 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:44.583406 kubelet[2980]: E0416 04:06:44.583163 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:44.905506 kubelet[2980]: E0416 04:06:44.899003 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:06:45.632320 kubelet[2980]: E0416 04:06:45.631149 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:06:46.492839 systemd[1]: Started sshd@54-10.0.0.115:22-10.0.0.1:50106.service - OpenSSH per-connection server daemon (10.0.0.1:50106). 
Apr 16 04:06:46.624173 kubelet[2980]: E0416 04:06:46.604293 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:47.524335 sshd[5495]: Accepted publickey for core from 10.0.0.1 port 50106 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:06:47.526846 sshd-session[5495]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:06:47.677427 systemd-logind[1549]: New session 55 of user core. Apr 16 04:06:47.690812 systemd[1]: Started session-55.scope - Session 55 of User core. Apr 16 04:06:48.578607 kubelet[2980]: E0416 04:06:48.574722 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:49.378554 sshd[5498]: Connection closed by 10.0.0.1 port 50106 Apr 16 04:06:49.414425 sshd-session[5495]: pam_unix(sshd:session): session closed for user core Apr 16 04:06:49.487440 systemd[1]: sshd@54-10.0.0.115:22-10.0.0.1:50106.service: Deactivated successfully. Apr 16 04:06:49.535455 systemd[1]: session-55.scope: Deactivated successfully. Apr 16 04:06:49.599577 systemd-logind[1549]: Session 55 logged out. Waiting for processes to exit. Apr 16 04:06:49.793809 systemd-logind[1549]: Removed session 55. 
Apr 16 04:06:49.906301 kubelet[2980]: E0416 04:06:49.905887 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:06:50.576225 kubelet[2980]: E0416 04:06:50.571353 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:52.594245 kubelet[2980]: E0416 04:06:52.581057 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:54.559536 systemd[1]: Started sshd@55-10.0.0.115:22-10.0.0.1:50116.service - OpenSSH per-connection server daemon (10.0.0.1:50116). 
Apr 16 04:06:54.701778 kubelet[2980]: E0416 04:06:54.701485 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:06:55.119316 kubelet[2980]: E0416 04:06:55.110908 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:06:55.491066 sshd[5518]: Accepted publickey for core from 10.0.0.1 port 50116 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:06:55.658564 sshd-session[5518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:06:55.931750 systemd-logind[1549]: New session 56 of user core. Apr 16 04:06:56.019678 systemd[1]: Started session-56.scope - Session 56 of User core. 
Apr 16 04:06:56.586464 kubelet[2980]: E0416 04:06:56.580888 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:01.172540 kubelet[2980]: E0416 04:07:01.139875 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.548s" Apr 16 04:07:01.825714 kubelet[2980]: E0416 04:07:01.151944 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:07:02.430316 kubelet[2980]: E0416 04:07:02.413634 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.268s" Apr 16 04:07:02.582709 kubelet[2980]: E0416 04:07:02.451231 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:02.582709 kubelet[2980]: E0416 04:07:02.571516 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:07:04.962061 kubelet[2980]: E0416 04:07:04.883899 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" 
podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:05.178865 sshd[5521]: Connection closed by 10.0.0.1 port 50116 Apr 16 04:07:05.277595 sshd-session[5518]: pam_unix(sshd:session): session closed for user core Apr 16 04:07:06.278550 systemd[1]: sshd@55-10.0.0.115:22-10.0.0.1:50116.service: Deactivated successfully. Apr 16 04:07:06.330256 systemd[1]: session-56.scope: Deactivated successfully. Apr 16 04:07:06.340582 systemd[1]: session-56.scope: Consumed 3.315s CPU time, 16.3M memory peak. Apr 16 04:07:06.355185 kubelet[2980]: E0416 04:07:06.290052 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:07:06.381748 systemd-logind[1549]: Session 56 logged out. Waiting for processes to exit. Apr 16 04:07:06.392865 kubelet[2980]: E0416 04:07:06.392612 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.512s" Apr 16 04:07:06.397452 systemd-logind[1549]: Removed session 56. 
Apr 16 04:07:07.195986 kubelet[2980]: E0416 04:07:07.047783 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:09.947845 kubelet[2980]: E0416 04:07:09.888906 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.167s" Apr 16 04:07:10.569269 kubelet[2980]: E0416 04:07:10.089953 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:12.087581 systemd[1]: Started sshd@56-10.0.0.115:22-10.0.0.1:39192.service - OpenSSH per-connection server daemon (10.0.0.1:39192). 
Apr 16 04:07:12.244035 kubelet[2980]: E0416 04:07:12.214573 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:07:12.343301 kubelet[2980]: E0416 04:07:12.339288 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.561s" Apr 16 04:07:12.343301 kubelet[2980]: E0416 04:07:12.342882 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:18.065923 kubelet[2980]: E0416 04:07:18.065217 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:07:18.656136 kubelet[2980]: E0416 04:07:18.655210 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.058s" Apr 16 04:07:18.661544 kubelet[2980]: E0416 04:07:18.660857 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:19.170468 kubelet[2980]: E0416 04:07:19.121344 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:07:20.791216 kubelet[2980]: E0416 04:07:20.701017 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network 
is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:21.285338 systemd[1]: cri-containerd-83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929.scope: Deactivated successfully. Apr 16 04:07:21.382306 systemd[1]: cri-containerd-83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929.scope: Consumed 20.222s CPU time, 61.9M memory peak, 784K read from disk. Apr 16 04:07:21.751064 kubelet[2980]: E0416 04:07:21.679156 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.104s" Apr 16 04:07:21.960569 containerd[1575]: time="2026-04-16T04:07:21.738976851Z" level=info msg="received container exit event container_id:\"83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929\" id:\"83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929\" pid:4697 exit_status:1 exited_at:{seconds:1776312441 nanos:675056260}" Apr 16 04:07:22.254013 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 39192 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:07:22.247355 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:07:22.353498 systemd-logind[1549]: New session 57 of user core. Apr 16 04:07:22.485712 systemd[1]: Started session-57.scope - Session 57 of User core. Apr 16 04:07:22.661237 systemd[1]: cri-containerd-d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b.scope: Deactivated successfully. 
Apr 16 04:07:22.687406 kubelet[2980]: E0416 04:07:22.662244 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:22.663582 systemd[1]: cri-containerd-d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b.scope: Consumed 31.862s CPU time, 95M memory peak. Apr 16 04:07:22.761021 containerd[1575]: time="2026-04-16T04:07:22.747832943Z" level=info msg="received container exit event container_id:\"d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b\" id:\"d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b\" pid:4727 exit_status:1 exited_at:{seconds:1776312442 nanos:747482274}" Apr 16 04:07:23.248265 kubelet[2980]: E0416 04:07:23.239632 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:07:24.637589 kubelet[2980]: E0416 04:07:24.594939 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:26.801549 kubelet[2980]: E0416 04:07:26.764343 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:07:27.068358 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929-rootfs.mount: Deactivated successfully. Apr 16 04:07:27.695964 sshd[5551]: Connection closed by 10.0.0.1 port 39192 Apr 16 04:07:27.726701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b-rootfs.mount: Deactivated successfully. Apr 16 04:07:27.748847 sshd-session[5537]: pam_unix(sshd:session): session closed for user core Apr 16 04:07:28.397629 kubelet[2980]: E0416 04:07:28.396894 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:07:29.125934 systemd[1]: sshd@56-10.0.0.115:22-10.0.0.1:39192.service: Deactivated successfully. Apr 16 04:07:29.196957 systemd[1]: sshd@56-10.0.0.115:22-10.0.0.1:39192.service: Consumed 2.401s CPU time, 3.9M memory peak. Apr 16 04:07:29.285978 systemd[1]: session-57.scope: Deactivated successfully. Apr 16 04:07:29.288537 systemd[1]: session-57.scope: Consumed 1.806s CPU time, 16.6M memory peak. Apr 16 04:07:29.364307 systemd-logind[1549]: Session 57 logged out. Waiting for processes to exit. Apr 16 04:07:29.389587 systemd-logind[1549]: Removed session 57. 
Apr 16 04:07:29.531192 kubelet[2980]: I0416 04:07:29.530750 2980 scope.go:122] "RemoveContainer" containerID="3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb"
Apr 16 04:07:29.540542 kubelet[2980]: E0416 04:07:29.531302 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:29.557250 kubelet[2980]: I0416 04:07:29.541704 2980 scope.go:122] "RemoveContainer" containerID="83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929"
Apr 16 04:07:29.557250 kubelet[2980]: E0416 04:07:29.550016 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:07:29.557250 kubelet[2980]: E0416 04:07:29.550928 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:07:29.784224 containerd[1575]: time="2026-04-16T04:07:29.751507144Z" level=info msg="RemoveContainer for \"3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb\""
Apr 16 04:07:29.983999 containerd[1575]: time="2026-04-16T04:07:29.967326128Z" level=info msg="RemoveContainer for \"3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb\" returns successfully"
Apr 16 04:07:30.690522 kubelet[2980]: I0416 04:07:30.690232 2980 scope.go:122] "RemoveContainer" containerID="7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90"
Apr 16 04:07:30.691896 kubelet[2980]: I0416 04:07:30.690795 2980 scope.go:122] "RemoveContainer" containerID="d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b"
Apr 16 04:07:30.691896 kubelet[2980]: E0416 04:07:30.691040 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b"
Apr 16 04:07:30.720427 containerd[1575]: time="2026-04-16T04:07:30.714564593Z" level=info msg="RemoveContainer for \"7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90\""
Apr 16 04:07:30.914944 containerd[1575]: time="2026-04-16T04:07:30.914190323Z" level=warning msg="container event discarded" container=7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90 type=CONTAINER_STOPPED_EVENT
Apr 16 04:07:31.427974 containerd[1575]: time="2026-04-16T04:07:31.408256537Z" level=info msg="RemoveContainer for \"7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90\" returns successfully"
Apr 16 04:07:31.623971 kubelet[2980]: E0416 04:07:31.609994 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:32.624844 containerd[1575]: time="2026-04-16T04:07:32.617866807Z" level=warning msg="container event discarded" container=3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb type=CONTAINER_STOPPED_EVENT
Apr 16 04:07:32.980927 containerd[1575]: time="2026-04-16T04:07:32.893240784Z" level=warning msg="container event discarded" container=7531e5152986a181564729016f8769cda1e7bf6dc3b8897f596ea901f2e3bda8 type=CONTAINER_DELETED_EVENT
Apr 16 04:07:33.126861 systemd[1]: Started sshd@57-10.0.0.115:22-10.0.0.1:46260.service - OpenSSH per-connection server daemon (10.0.0.1:46260).
Apr 16 04:07:33.192734 containerd[1575]: time="2026-04-16T04:07:33.192165589Z" level=warning msg="container event discarded" container=8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563 type=CONTAINER_STOPPED_EVENT
Apr 16 04:07:33.425428 kubelet[2980]: E0416 04:07:33.416958 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:07:33.668213 kubelet[2980]: E0416 04:07:33.648962 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:34.572522 sshd[5585]: Accepted publickey for core from 10.0.0.1 port 46260 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:07:34.701336 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:07:34.877742 systemd-logind[1549]: New session 58 of user core.
Apr 16 04:07:34.906348 containerd[1575]: time="2026-04-16T04:07:34.898980102Z" level=warning msg="container event discarded" container=c58e5c055eb6f68a39f52bf7db5fed7ecf7365dd0766791ee108d54ba7949ccd type=CONTAINER_DELETED_EVENT
Apr 16 04:07:34.912938 systemd[1]: Started session-58.scope - Session 58 of User core.
Apr 16 04:07:35.579712 kubelet[2980]: E0416 04:07:35.567881 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:36.478123 sshd[5588]: Connection closed by 10.0.0.1 port 46260
Apr 16 04:07:36.495379 containerd[1575]: time="2026-04-16T04:07:36.494202129Z" level=warning msg="container event discarded" container=7423a861bc058671e7333174d225ee192265382e136e1a45bf1d432ea3e46691 type=CONTAINER_DELETED_EVENT
Apr 16 04:07:36.527751 sshd-session[5585]: pam_unix(sshd:session): session closed for user core
Apr 16 04:07:36.613428 systemd[1]: sshd@57-10.0.0.115:22-10.0.0.1:46260.service: Deactivated successfully.
Apr 16 04:07:36.713148 systemd[1]: session-58.scope: Deactivated successfully.
Apr 16 04:07:36.740409 systemd-logind[1549]: Session 58 logged out. Waiting for processes to exit.
Apr 16 04:07:36.742684 systemd-logind[1549]: Removed session 58.
Apr 16 04:07:37.612074 kubelet[2980]: E0416 04:07:37.611177 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:38.661010 kubelet[2980]: E0416 04:07:38.660569 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:07:39.575773 kubelet[2980]: E0416 04:07:39.573597 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:39.672058 kubelet[2980]: I0416 04:07:39.662552 2980 scope.go:122] "RemoveContainer" containerID="83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929"
Apr 16 04:07:39.868482 kubelet[2980]: E0416 04:07:39.846842 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:07:40.059614 kubelet[2980]: E0416 04:07:40.029781 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:07:41.713299 kubelet[2980]: E0416 04:07:41.708581 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:42.669283 systemd[1]: Started sshd@58-10.0.0.115:22-10.0.0.1:39140.service - OpenSSH per-connection server daemon (10.0.0.1:39140).
Apr 16 04:07:43.737683 kubelet[2980]: E0416 04:07:43.729938 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:44.464715 kubelet[2980]: E0416 04:07:44.245805 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:07:45.966410 kubelet[2980]: E0416 04:07:45.960809 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.388s"
Apr 16 04:07:45.966410 kubelet[2980]: E0416 04:07:45.961386 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:47.992958 kubelet[2980]: E0416 04:07:47.977817 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:49.741921 kubelet[2980]: E0416 04:07:49.738447 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:07:50.476936 sshd[5602]: Accepted publickey for core from 10.0.0.1 port 39140 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:07:51.051403 kubelet[2980]: E0416 04:07:51.051144 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.432s"
Apr 16 04:07:51.681283 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:07:51.991291 systemd-logind[1549]: New session 59 of user core.
Apr 16 04:07:52.114916 systemd[1]: Started session-59.scope - Session 59 of User core.
Apr 16 04:07:52.395046 containerd[1575]: time="2026-04-16T04:07:52.129468750Z" level=warning msg="container event discarded" container=461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142 type=CONTAINER_CREATED_EVENT
Apr 16 04:07:52.948478 containerd[1575]: time="2026-04-16T04:07:52.843893278Z" level=warning msg="container event discarded" container=461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142 type=CONTAINER_STARTED_EVENT
Apr 16 04:07:52.976402 kubelet[2980]: E0416 04:07:52.971987 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.921s"
Apr 16 04:07:53.465716 kubelet[2980]: E0416 04:07:53.383671 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:07:55.366543 kubelet[2980]: E0416 04:07:55.348195 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:07:57.064117 kubelet[2980]: E0416 04:07:57.055659 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.477s"
Apr 16 04:07:59.994696 kubelet[2980]: E0416 04:07:59.991458 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.895s"
Apr 16 04:08:01.209297 kubelet[2980]: E0416 04:08:01.183397 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:01.893923 kubelet[2980]: E0416 04:08:01.892606 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.703s"
Apr 16 04:08:02.094611 kubelet[2980]: E0416 04:08:02.086925 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:02.176418 sshd[5609]: Connection closed by 10.0.0.1 port 39140
Apr 16 04:08:02.188871 sshd-session[5602]: pam_unix(sshd:session): session closed for user core
Apr 16 04:08:02.386610 kubelet[2980]: E0416 04:08:02.259251 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:08:02.772583 systemd[1]: sshd@58-10.0.0.115:22-10.0.0.1:39140.service: Deactivated successfully.
Apr 16 04:08:02.864595 systemd[1]: sshd@58-10.0.0.115:22-10.0.0.1:39140.service: Consumed 2.334s CPU time, 3.2M memory peak.
Apr 16 04:08:03.377197 systemd[1]: session-59.scope: Deactivated successfully.
Apr 16 04:08:03.523048 systemd[1]: session-59.scope: Consumed 5.018s CPU time, 15.9M memory peak.
Apr 16 04:08:03.738774 systemd-logind[1549]: Session 59 logged out. Waiting for processes to exit.
Apr 16 04:08:04.183881 systemd-logind[1549]: Removed session 59.
Apr 16 04:08:04.550193 kubelet[2980]: E0416 04:08:04.517461 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.65s"
Apr 16 04:08:04.974850 kubelet[2980]: E0416 04:08:04.928677 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:05.700507 kubelet[2980]: E0416 04:08:05.694052 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.117s"
Apr 16 04:08:06.370270 kubelet[2980]: E0416 04:08:06.354904 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:06.575367 kubelet[2980]: E0416 04:08:06.574894 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:06.678689 kubelet[2980]: E0416 04:08:06.647891 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:08:08.251679 systemd[1]: Started sshd@59-10.0.0.115:22-10.0.0.1:42378.service - OpenSSH per-connection server daemon (10.0.0.1:42378).
Apr 16 04:08:08.862394 kubelet[2980]: E0416 04:08:08.858854 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:11.998716 kubelet[2980]: E0416 04:08:11.988597 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:12.333065 kubelet[2980]: E0416 04:08:12.062263 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.484s"
Apr 16 04:08:14.153135 kubelet[2980]: E0416 04:08:14.150753 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.883s"
Apr 16 04:08:16.097050 kubelet[2980]: E0416 04:08:16.089917 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.939s"
Apr 16 04:08:17.069877 kubelet[2980]: E0416 04:08:17.043716 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:17.304142 kubelet[2980]: E0416 04:08:17.132142 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.002s"
Apr 16 04:08:18.159767 kubelet[2980]: E0416 04:08:18.129898 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:18.663796 kubelet[2980]: E0416 04:08:18.635369 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:08:21.390341 kubelet[2980]: E0416 04:08:21.388532 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.82s"
Apr 16 04:08:21.543986 kubelet[2980]: E0416 04:08:21.536973 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:22.084979 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 42378 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:08:22.201446 kubelet[2980]: E0416 04:08:22.110975 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:22.404870 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:08:22.769020 kubelet[2980]: E0416 04:08:22.758364 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:23.457572 systemd-logind[1549]: New session 60 of user core.
Apr 16 04:08:23.597907 systemd[1]: Started session-60.scope - Session 60 of User core.
Apr 16 04:08:24.645757 kubelet[2980]: E0416 04:08:24.622990 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:25.788430 kubelet[2980]: E0416 04:08:25.782499 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.197s"
Apr 16 04:08:26.791722 kubelet[2980]: E0416 04:08:26.757842 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:27.437211 kubelet[2980]: E0416 04:08:27.433836 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:27.890446 kubelet[2980]: E0416 04:08:27.867108 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.282s"
Apr 16 04:08:28.667919 kubelet[2980]: E0416 04:08:28.667280 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:30.382220 kubelet[2980]: E0416 04:08:30.329989 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.672s"
Apr 16 04:08:30.573366 kubelet[2980]: E0416 04:08:30.572340 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:31.128865 sshd[5635]: Connection closed by 10.0.0.1 port 42378
Apr 16 04:08:31.253203 sshd-session[5625]: pam_unix(sshd:session): session closed for user core
Apr 16 04:08:31.750856 systemd[1]: sshd@59-10.0.0.115:22-10.0.0.1:42378.service: Deactivated successfully.
Apr 16 04:08:31.759564 systemd[1]: sshd@59-10.0.0.115:22-10.0.0.1:42378.service: Consumed 4.199s CPU time, 3.5M memory peak.
Apr 16 04:08:32.176709 systemd[1]: session-60.scope: Deactivated successfully.
Apr 16 04:08:32.194973 systemd[1]: session-60.scope: Consumed 3.463s CPU time, 16.1M memory peak.
Apr 16 04:08:32.226299 systemd-logind[1549]: Session 60 logged out. Waiting for processes to exit.
Apr 16 04:08:32.353716 systemd-logind[1549]: Removed session 60.
Apr 16 04:08:32.571974 kubelet[2980]: E0416 04:08:32.560994 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:32.687025 kubelet[2980]: E0416 04:08:32.668027 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:34.614254 kubelet[2980]: E0416 04:08:34.603957 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:35.769273 kubelet[2980]: I0416 04:08:35.768214 2980 scope.go:122] "RemoveContainer" containerID="d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b"
Apr 16 04:08:35.769273 kubelet[2980]: E0416 04:08:35.769024 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b"
Apr 16 04:08:36.551594 systemd[1]: Started sshd@60-10.0.0.115:22-10.0.0.1:37682.service - OpenSSH per-connection server daemon (10.0.0.1:37682).
Apr 16 04:08:37.092748 kubelet[2980]: E0416 04:08:37.089650 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:37.673280 kubelet[2980]: E0416 04:08:37.671776 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:38.691306 kubelet[2980]: E0416 04:08:38.687433 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:40.657460 kubelet[2980]: E0416 04:08:40.656796 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:40.993857 sshd[5653]: Accepted publickey for core from 10.0.0.1 port 37682 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:08:41.146355 sshd-session[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:08:41.784069 systemd-logind[1549]: New session 61 of user core.
Apr 16 04:08:41.819472 systemd[1]: Started session-61.scope - Session 61 of User core.
Apr 16 04:08:42.576970 kubelet[2980]: E0416 04:08:42.576435 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:42.763810 kubelet[2980]: E0416 04:08:42.759893 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:45.083315 kubelet[2980]: E0416 04:08:45.057534 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:46.453840 sshd[5656]: Connection closed by 10.0.0.1 port 37682
Apr 16 04:08:46.475851 sshd-session[5653]: pam_unix(sshd:session): session closed for user core
Apr 16 04:08:46.557413 systemd[1]: sshd@60-10.0.0.115:22-10.0.0.1:37682.service: Deactivated successfully.
Apr 16 04:08:46.567665 systemd[1]: sshd@60-10.0.0.115:22-10.0.0.1:37682.service: Consumed 1.526s CPU time, 3.5M memory peak.
Apr 16 04:08:46.729573 kubelet[2980]: E0416 04:08:46.714697 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:46.748388 systemd[1]: session-61.scope: Deactivated successfully.
Apr 16 04:08:46.750587 systemd[1]: session-61.scope: Consumed 1.816s CPU time, 16.1M memory peak.
Apr 16 04:08:46.815620 systemd-logind[1549]: Session 61 logged out. Waiting for processes to exit.
Apr 16 04:08:46.898819 systemd-logind[1549]: Removed session 61.
Apr 16 04:08:48.044322 kubelet[2980]: E0416 04:08:48.043608 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:49.041689 kubelet[2980]: E0416 04:08:49.040935 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:50.567544 kubelet[2980]: E0416 04:08:50.567145 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:52.198863 systemd[1]: Started sshd@61-10.0.0.115:22-10.0.0.1:35144.service - OpenSSH per-connection server daemon (10.0.0.1:35144).
Apr 16 04:08:52.689820 kubelet[2980]: E0416 04:08:52.676280 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:53.219716 kubelet[2980]: E0416 04:08:53.216888 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:53.982476 sshd[5673]: Accepted publickey for core from 10.0.0.1 port 35144 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:08:54.003672 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:08:54.586435 systemd-logind[1549]: New session 62 of user core.
Apr 16 04:08:54.666856 kubelet[2980]: E0416 04:08:54.658817 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:54.680818 systemd[1]: Started session-62.scope - Session 62 of User core.
Apr 16 04:08:56.634655 kubelet[2980]: I0416 04:08:56.630018 2980 scope.go:122] "RemoveContainer" containerID="83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929"
Apr 16 04:08:56.785227 kubelet[2980]: E0416 04:08:56.702588 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:08:56.785227 kubelet[2980]: E0416 04:08:56.777434 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:08:56.863227 kubelet[2980]: E0416 04:08:56.839048 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:08:57.109344 sshd[5676]: Connection closed by 10.0.0.1 port 35144
Apr 16 04:08:57.165225 sshd-session[5673]: pam_unix(sshd:session): session closed for user core
Apr 16 04:08:57.358561 systemd[1]: sshd@61-10.0.0.115:22-10.0.0.1:35144.service: Deactivated successfully.
Apr 16 04:08:57.675273 systemd[1]: session-62.scope: Deactivated successfully.
Apr 16 04:08:57.740761 systemd-logind[1549]: Session 62 logged out. Waiting for processes to exit.
Apr 16 04:08:57.791906 systemd-logind[1549]: Removed session 62.
Apr 16 04:08:58.318477 kubelet[2980]: E0416 04:08:58.317633 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:08:58.771835 kubelet[2980]: E0416 04:08:58.771477 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:09:00.600992 kubelet[2980]: E0416 04:09:00.593348 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:09:02.627545 kubelet[2980]: E0416 04:09:02.626936 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:09:02.723645 systemd[1]: Started sshd@62-10.0.0.115:22-10.0.0.1:44144.service - OpenSSH per-connection server daemon (10.0.0.1:44144).
Apr 16 04:09:03.456346 kubelet[2980]: E0416 04:09:03.445008 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:09:03.463989 containerd[1575]: time="2026-04-16T04:09:03.458075353Z" level=warning msg="container event discarded" container=83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929 type=CONTAINER_CREATED_EVENT
Apr 16 04:09:04.834976 kubelet[2980]: E0416 04:09:04.828513 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:09:05.375680 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 44144 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:09:05.476486 sshd-session[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:09:06.271298 systemd-logind[1549]: New session 63 of user core.
Apr 16 04:09:06.342701 systemd[1]: Started session-63.scope - Session 63 of User core.
Apr 16 04:09:06.681356 containerd[1575]: time="2026-04-16T04:09:06.674366371Z" level=warning msg="container event discarded" container=d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b type=CONTAINER_CREATED_EVENT Apr 16 04:09:06.775248 kubelet[2980]: E0416 04:09:06.774642 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:08.655887 kubelet[2980]: E0416 04:09:08.653335 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:08.693521 kubelet[2980]: E0416 04:09:08.669243 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:09:09.346373 sshd[5692]: Connection closed by 10.0.0.1 port 44144 Apr 16 04:09:09.353152 sshd-session[5689]: pam_unix(sshd:session): session closed for user core Apr 16 04:09:09.516834 systemd[1]: sshd@62-10.0.0.115:22-10.0.0.1:44144.service: Deactivated successfully. Apr 16 04:09:09.585139 systemd[1]: session-63.scope: Deactivated successfully. Apr 16 04:09:09.641074 systemd[1]: session-63.scope: Consumed 1.288s CPU time, 18.4M memory peak. Apr 16 04:09:09.816507 systemd-logind[1549]: Session 63 logged out. Waiting for processes to exit. Apr 16 04:09:09.854482 systemd-logind[1549]: Removed session 63. 
Apr 16 04:09:11.021298 kubelet[2980]: E0416 04:09:11.020830 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:12.578696 kubelet[2980]: E0416 04:09:12.577343 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:13.561534 containerd[1575]: time="2026-04-16T04:09:13.556399092Z" level=warning msg="container event discarded" container=83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929 type=CONTAINER_STARTED_EVENT Apr 16 04:09:13.600921 kubelet[2980]: E0416 04:09:13.597774 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:09:13.718304 kubelet[2980]: E0416 04:09:13.700635 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:09:14.762546 kubelet[2980]: E0416 04:09:14.762271 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:14.856414 systemd[1]: Started sshd@63-10.0.0.115:22-10.0.0.1:47718.service - OpenSSH per-connection server daemon (10.0.0.1:47718). 
Apr 16 04:09:16.294795 sshd[5709]: Accepted publickey for core from 10.0.0.1 port 47718 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:09:16.420461 sshd-session[5709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:09:16.655661 kubelet[2980]: E0416 04:09:16.582609 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:16.947661 systemd-logind[1549]: New session 64 of user core. Apr 16 04:09:17.040199 systemd[1]: Started session-64.scope - Session 64 of User core. Apr 16 04:09:18.661271 kubelet[2980]: E0416 04:09:18.657792 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:18.910838 kubelet[2980]: E0416 04:09:18.877736 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:09:19.747804 containerd[1575]: time="2026-04-16T04:09:19.736934055Z" level=warning msg="container event discarded" container=d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b type=CONTAINER_STARTED_EVENT Apr 16 04:09:20.770369 kubelet[2980]: E0416 04:09:20.759711 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" 
Apr 16 04:09:21.491885 kubelet[2980]: E0416 04:09:20.770720 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:09:21.777424 kubelet[2980]: E0416 04:09:21.773497 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.188s" Apr 16 04:09:21.966178 sshd[5712]: Connection closed by 10.0.0.1 port 47718 Apr 16 04:09:21.960785 sshd-session[5709]: pam_unix(sshd:session): session closed for user core Apr 16 04:09:22.034462 systemd[1]: sshd@63-10.0.0.115:22-10.0.0.1:47718.service: Deactivated successfully. Apr 16 04:09:22.064431 systemd[1]: session-64.scope: Deactivated successfully. Apr 16 04:09:22.081914 systemd[1]: session-64.scope: Consumed 1.540s CPU time, 18.7M memory peak. Apr 16 04:09:22.248068 systemd-logind[1549]: Session 64 logged out. Waiting for processes to exit. Apr 16 04:09:22.263564 systemd-logind[1549]: Removed session 64. 
Apr 16 04:09:22.605629 kubelet[2980]: E0416 04:09:22.598214 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:24.030863 kubelet[2980]: E0416 04:09:23.985855 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:09:24.578851 kubelet[2980]: E0416 04:09:24.573956 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:26.695681 kubelet[2980]: E0416 04:09:26.691071 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:28.066989 systemd[1]: Started sshd@64-10.0.0.115:22-10.0.0.1:43024.service - OpenSSH per-connection server daemon (10.0.0.1:43024). 
Apr 16 04:09:28.912350 kubelet[2980]: E0416 04:09:28.909922 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:29.272453 kubelet[2980]: E0416 04:09:29.185909 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:09:29.758190 kubelet[2980]: E0416 04:09:29.753362 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.094s" Apr 16 04:09:31.147247 kubelet[2980]: E0416 04:09:31.124851 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:31.707800 sshd[5730]: Accepted publickey for core from 10.0.0.1 port 43024 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:09:31.716714 sshd-session[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:09:32.184650 systemd-logind[1549]: New session 65 of user core. Apr 16 04:09:32.273154 systemd[1]: Started session-65.scope - Session 65 of User core. 
Apr 16 04:09:32.646929 kubelet[2980]: E0416 04:09:32.635522 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:09:32.666707 kubelet[2980]: E0416 04:09:32.666336 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:34.643032 kubelet[2980]: E0416 04:09:34.635389 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:09:34.886780 kubelet[2980]: E0416 04:09:34.660229 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:36.679209 kubelet[2980]: E0416 04:09:36.657061 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:36.802977 sshd[5733]: Connection closed by 10.0.0.1 port 43024 Apr 16 04:09:36.805858 sshd-session[5730]: pam_unix(sshd:session): session closed for user core Apr 16 04:09:36.953922 systemd[1]: sshd@64-10.0.0.115:22-10.0.0.1:43024.service: Deactivated successfully. Apr 16 04:09:36.992874 systemd[1]: session-65.scope: Deactivated successfully. 
Apr 16 04:09:36.999782 systemd[1]: session-65.scope: Consumed 1.541s CPU time, 18.4M memory peak. Apr 16 04:09:37.035151 systemd-logind[1549]: Session 65 logged out. Waiting for processes to exit. Apr 16 04:09:37.044288 systemd-logind[1549]: Removed session 65. Apr 16 04:09:38.678217 kubelet[2980]: E0416 04:09:38.676541 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:39.708070 kubelet[2980]: E0416 04:09:39.702161 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:09:40.706405 kubelet[2980]: E0416 04:09:40.701590 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:42.204861 systemd[1]: Started sshd@65-10.0.0.115:22-10.0.0.1:47872.service - OpenSSH per-connection server daemon (10.0.0.1:47872). 
Apr 16 04:09:42.733002 kubelet[2980]: E0416 04:09:42.599947 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:44.578435 kubelet[2980]: E0416 04:09:44.577059 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:44.597505 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 47872 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:09:44.600274 sshd-session[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:09:44.749790 kubelet[2980]: E0416 04:09:44.749566 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:09:44.870546 systemd-logind[1549]: New session 66 of user core. Apr 16 04:09:44.905492 systemd[1]: Started session-66.scope - Session 66 of User core. 
Apr 16 04:09:46.579857 kubelet[2980]: E0416 04:09:46.571446 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:47.993430 sshd[5751]: Connection closed by 10.0.0.1 port 47872 Apr 16 04:09:48.059038 sshd-session[5748]: pam_unix(sshd:session): session closed for user core Apr 16 04:09:48.445406 systemd[1]: sshd@65-10.0.0.115:22-10.0.0.1:47872.service: Deactivated successfully. Apr 16 04:09:48.591386 kubelet[2980]: E0416 04:09:48.589469 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:48.619253 systemd[1]: session-66.scope: Deactivated successfully. Apr 16 04:09:48.632958 systemd[1]: session-66.scope: Consumed 1.270s CPU time, 15M memory peak. Apr 16 04:09:48.829275 systemd-logind[1549]: Session 66 logged out. Waiting for processes to exit. Apr 16 04:09:49.072837 systemd-logind[1549]: Removed session 66. 
Apr 16 04:09:49.775576 kubelet[2980]: E0416 04:09:49.775211 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:09:50.668461 kubelet[2980]: E0416 04:09:50.668021 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:52.795398 kubelet[2980]: E0416 04:09:52.792432 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:53.481175 systemd[1]: Started sshd@66-10.0.0.115:22-10.0.0.1:34882.service - OpenSSH per-connection server daemon (10.0.0.1:34882). 
Apr 16 04:09:54.837642 kubelet[2980]: E0416 04:09:54.832745 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:09:55.217636 kubelet[2980]: E0416 04:09:55.217420 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:55.979550 sshd[5766]: Accepted publickey for core from 10.0.0.1 port 34882 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:09:56.644152 sshd-session[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:09:56.990936 kubelet[2980]: E0416 04:09:56.975958 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:57.492416 systemd-logind[1549]: New session 67 of user core. Apr 16 04:09:57.684050 systemd[1]: Started session-67.scope - Session 67 of User core. 
Apr 16 04:09:58.634301 kubelet[2980]: E0416 04:09:58.633490 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:09:59.114576 sshd[5769]: Connection closed by 10.0.0.1 port 34882 Apr 16 04:09:59.189237 sshd-session[5766]: pam_unix(sshd:session): session closed for user core Apr 16 04:09:59.231833 systemd[1]: sshd@66-10.0.0.115:22-10.0.0.1:34882.service: Deactivated successfully. Apr 16 04:09:59.349546 systemd[1]: session-67.scope: Deactivated successfully. Apr 16 04:09:59.370408 systemd-logind[1549]: Session 67 logged out. Waiting for processes to exit. Apr 16 04:09:59.381256 systemd-logind[1549]: Removed session 67. Apr 16 04:09:59.850006 kubelet[2980]: E0416 04:09:59.849422 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:10:00.577715 kubelet[2980]: E0416 04:10:00.576651 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:10:02.775150 kubelet[2980]: I0416 04:10:02.771325 2980 scope.go:122] "RemoveContainer" containerID="83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929" Apr 16 04:10:03.169840 kubelet[2980]: I0416 04:10:02.832332 2980 scope.go:122] "RemoveContainer" containerID="d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b" Apr 16 04:10:03.169840 kubelet[2980]: E0416 04:10:02.772462 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" 
Apr 16 04:10:03.169840 kubelet[2980]: E0416 04:10:02.879487 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:10:03.272337 containerd[1575]: time="2026-04-16T04:10:03.269331909Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:6,}" Apr 16 04:10:03.496475 containerd[1575]: time="2026-04-16T04:10:03.476630186Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:6,}" Apr 16 04:10:03.822653 containerd[1575]: time="2026-04-16T04:10:03.781869609Z" level=info msg="Container 23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:10:03.964765 containerd[1575]: time="2026-04-16T04:10:03.964679862Z" level=info msg="Container 1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:10:03.971305 containerd[1575]: time="2026-04-16T04:10:03.970839607Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for &ContainerMetadata{Name:tigera-operator,Attempt:6,} returns container id \"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\"" Apr 16 04:10:03.973165 containerd[1575]: time="2026-04-16T04:10:03.973065390Z" level=info msg="StartContainer for \"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\"" 
Apr 16 04:10:04.010287 containerd[1575]: time="2026-04-16T04:10:04.003873087Z" level=info msg="connecting to shim 23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b" address="unix:///run/containerd/s/b40817c4c3b3e5498badbc035a393ccaaa43aaaa06e8111d2e4d4485037a2b06" protocol=ttrpc version=3 Apr 16 04:10:04.116598 containerd[1575]: time="2026-04-16T04:10:04.114976004Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:6,} returns container id \"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\"" Apr 16 04:10:04.116598 containerd[1575]: time="2026-04-16T04:10:04.116191760Z" level=info msg="StartContainer for \"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\"" Apr 16 04:10:04.119718 containerd[1575]: time="2026-04-16T04:10:04.117707162Z" level=info msg="connecting to shim 1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090" address="unix:///run/containerd/s/b0f2c5cfffdebf676e7ed85c3328df6a87775c2b04620a5f0b47a494ee449f34" protocol=ttrpc version=3 Apr 16 04:10:05.033054 systemd[1]: Started sshd@67-10.0.0.115:22-10.0.0.1:37936.service - OpenSSH per-connection server daemon (10.0.0.1:37936). 
Apr 16 04:10:06.237514 kubelet[2980]: E0416 04:10:06.231033 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.655s" Apr 16 04:10:06.582492 kubelet[2980]: E0416 04:10:06.538331 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:10:08.502142 kubelet[2980]: E0416 04:10:08.497346 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.756s" Apr 16 04:10:09.440331 systemd[1]: Started cri-containerd-23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b.scope - libcontainer container 23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b. Apr 16 04:10:10.043654 kubelet[2980]: E0416 04:10:10.006260 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.416s" Apr 16 04:10:10.650991 kubelet[2980]: E0416 04:10:10.625791 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:10:11.865785 kubelet[2980]: E0416 04:10:11.851910 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:10:11.930418 systemd[1]: Started cri-containerd-1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090.scope - libcontainer container 1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090. 
Apr 16 04:10:12.106791 containerd[1575]: time="2026-04-16T04:10:11.914950845Z" level=error msg="get state for 23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b" error="context deadline exceeded" Apr 16 04:10:12.106791 containerd[1575]: time="2026-04-16T04:10:11.974750539Z" level=warning msg="unknown status" status=0 Apr 16 04:10:12.276319 sshd[5792]: Accepted publickey for core from 10.0.0.1 port 37936 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:10:12.337959 sshd-session[5792]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:10:13.607921 kubelet[2980]: E0416 04:10:13.573936 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:10:14.360958 containerd[1575]: time="2026-04-16T04:10:14.352169817Z" level=error msg="get state for 23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b" error="context deadline exceeded" Apr 16 04:10:14.455232 kubelet[2980]: E0416 04:10:14.370030 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.581s" Apr 16 04:10:14.375881 systemd-logind[1549]: New session 68 of user core. Apr 16 04:10:14.746264 containerd[1575]: time="2026-04-16T04:10:14.480372937Z" level=warning msg="unknown status" status=0 Apr 16 04:10:14.666298 systemd[1]: Started session-68.scope - Session 68 of User core. 
Apr 16 04:10:15.501686 kubelet[2980]: E0416 04:10:15.499357 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:10:17.055691 containerd[1575]: time="2026-04-16T04:10:17.051046159Z" level=error msg="get state for 23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b" error="context deadline exceeded" Apr 16 04:10:17.055691 containerd[1575]: time="2026-04-16T04:10:17.053429468Z" level=warning msg="unknown status" status=0 Apr 16 04:10:17.463133 kubelet[2980]: E0416 04:10:17.436964 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:10:17.903385 kubelet[2980]: E0416 04:10:17.831980 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.251s" Apr 16 04:10:18.042383 kubelet[2980]: E0416 04:10:17.961073 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:10:18.386065 containerd[1575]: time="2026-04-16T04:10:18.312364092Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 04:10:18.386065 containerd[1575]: time="2026-04-16T04:10:18.387131592Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 16 04:10:18.386065 containerd[1575]: time="2026-04-16T04:10:18.387637449Z" level=error msg="ttrpc: received message on inactive stream" stream=7 
Apr 16 04:10:22.147843 kubelet[2980]: E0416 04:10:22.147589 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.503s" Apr 16 04:10:22.162834 containerd[1575]: time="2026-04-16T04:10:22.161062177Z" level=info msg="StartContainer for \"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\" returns successfully" Apr 16 04:10:22.192485 kubelet[2980]: E0416 04:10:22.191652 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:10:22.277542 sshd[5830]: Connection closed by 10.0.0.1 port 37936 Apr 16 04:10:22.280188 sshd-session[5792]: pam_unix(sshd:session): session closed for user core Apr 16 04:10:22.776697 containerd[1575]: time="2026-04-16T04:10:22.386921233Z" level=info msg="StartContainer for \"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" returns successfully" Apr 16 04:10:23.132053 systemd[1]: sshd@67-10.0.0.115:22-10.0.0.1:37936.service: Deactivated successfully. Apr 16 04:10:23.249551 systemd[1]: sshd@67-10.0.0.115:22-10.0.0.1:37936.service: Consumed 1.876s CPU time, 5.4M memory peak. Apr 16 04:10:23.655175 systemd[1]: session-68.scope: Deactivated successfully. Apr 16 04:10:23.771292 systemd[1]: session-68.scope: Consumed 2.673s CPU time, 18.3M memory peak. Apr 16 04:10:24.236448 kubelet[2980]: E0416 04:10:24.235580 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:10:24.439173 systemd-logind[1549]: Session 68 logged out. Waiting for processes to exit. Apr 16 04:10:24.694639 systemd-logind[1549]: Removed session 68. 
Apr 16 04:10:26.368446 kubelet[2980]: E0416 04:10:26.367708 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:10:26.597152 kubelet[2980]: E0416 04:10:26.470774 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.84s" Apr 16 04:10:27.556174 kubelet[2980]: E0416 04:10:27.554686 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.083s" Apr 16 04:10:27.832588 kubelet[2980]: E0416 04:10:27.662301 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:10:27.926945 kubelet[2980]: E0416 04:10:27.666070 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:10:29.547799 systemd[1]: Started sshd@68-10.0.0.115:22-10.0.0.1:57876.service - OpenSSH per-connection server daemon (10.0.0.1:57876). 
Apr 16 04:10:30.027326 kubelet[2980]: E0416 04:10:29.929781 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.281s"
Apr 16 04:10:30.585462 kubelet[2980]: E0416 04:10:30.547542 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:10:31.957116 kubelet[2980]: E0416 04:10:31.935344 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.76s"
Apr 16 04:10:32.023161 kubelet[2980]: E0416 04:10:32.022615 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:32.044069 kubelet[2980]: E0416 04:10:32.038164 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:10:32.484582 sshd[5876]: Accepted publickey for core from 10.0.0.1 port 57876 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:10:32.537376 sshd-session[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:10:32.649995 systemd-logind[1549]: New session 69 of user core.
Apr 16 04:10:32.754951 systemd[1]: Started session-69.scope - Session 69 of User core.
Apr 16 04:10:33.594244 kubelet[2980]: E0416 04:10:33.587071 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:33.620894 kubelet[2980]: E0416 04:10:33.604903 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:10:33.768317 sshd[5879]: Connection closed by 10.0.0.1 port 57876
Apr 16 04:10:33.772313 sshd-session[5876]: pam_unix(sshd:session): session closed for user core
Apr 16 04:10:33.803738 systemd[1]: sshd@68-10.0.0.115:22-10.0.0.1:57876.service: Deactivated successfully.
Apr 16 04:10:33.836701 systemd[1]: session-69.scope: Deactivated successfully.
Apr 16 04:10:33.877656 systemd-logind[1549]: Session 69 logged out. Waiting for processes to exit.
Apr 16 04:10:34.145257 systemd-logind[1549]: Removed session 69.
Apr 16 04:10:35.597002 kubelet[2980]: E0416 04:10:35.587987 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:35.674434 kubelet[2980]: E0416 04:10:35.674041 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:10:37.593965 kubelet[2980]: E0416 04:10:37.584276 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:39.623575 kubelet[2980]: E0416 04:10:39.578714 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:39.601619 systemd[1]: Started sshd@69-10.0.0.115:22-10.0.0.1:44130.service - OpenSSH per-connection server daemon (10.0.0.1:44130).
Apr 16 04:10:41.496081 kubelet[2980]: E0416 04:10:41.408060 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:10:42.062655 kubelet[2980]: E0416 04:10:42.058605 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.447s"
Apr 16 04:10:42.075728 kubelet[2980]: E0416 04:10:42.074389 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:42.075728 kubelet[2980]: E0416 04:10:42.075663 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:10:42.981718 sshd[5912]: Accepted publickey for core from 10.0.0.1 port 44130 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:10:43.036581 sshd-session[5912]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:10:43.602242 systemd-logind[1549]: New session 70 of user core.
Apr 16 04:10:43.628282 kubelet[2980]: E0416 04:10:43.609363 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:43.651710 systemd[1]: Started session-70.scope - Session 70 of User core.
Apr 16 04:10:45.281831 sshd[5915]: Connection closed by 10.0.0.1 port 44130
Apr 16 04:10:45.328017 sshd-session[5912]: pam_unix(sshd:session): session closed for user core
Apr 16 04:10:45.579309 systemd[1]: sshd@69-10.0.0.115:22-10.0.0.1:44130.service: Deactivated successfully.
Apr 16 04:10:45.592601 kubelet[2980]: E0416 04:10:45.592410 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:45.628957 systemd[1]: session-70.scope: Deactivated successfully.
Apr 16 04:10:45.715011 systemd-logind[1549]: Session 70 logged out. Waiting for processes to exit.
Apr 16 04:10:45.728543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount17834916.mount: Deactivated successfully.
Apr 16 04:10:45.732454 systemd-logind[1549]: Removed session 70.
Apr 16 04:10:45.948869 containerd[1575]: time="2026-04-16T04:10:45.947793621Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:10:45.960782 containerd[1575]: time="2026-04-16T04:10:45.956982230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564"
Apr 16 04:10:45.960782 containerd[1575]: time="2026-04-16T04:10:45.960558738Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:10:46.063324 containerd[1575]: time="2026-04-16T04:10:46.062566723Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:10:46.065950 containerd[1575]: time="2026-04-16T04:10:46.065173282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 4m11.344711372s"
Apr 16 04:10:46.065950 containerd[1575]: time="2026-04-16T04:10:46.065905193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\""
Apr 16 04:10:46.145493 containerd[1575]: time="2026-04-16T04:10:46.138277028Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 16 04:10:46.348433 containerd[1575]: time="2026-04-16T04:10:46.347439885Z" level=info msg="Container 6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:10:46.632188 kubelet[2980]: E0416 04:10:46.627701 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:10:46.651320 containerd[1575]: time="2026-04-16T04:10:46.648323139Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117\""
Apr 16 04:10:46.679139 containerd[1575]: time="2026-04-16T04:10:46.678469963Z" level=info msg="StartContainer for \"6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117\""
Apr 16 04:10:46.746849 containerd[1575]: time="2026-04-16T04:10:46.746750266Z" level=info msg="connecting to shim 6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117" address="unix:///run/containerd/s/aeabc3715f557963c617c8591f62e432aca8901fc2a59ed43a1f9f47d5f9452d" protocol=ttrpc version=3
Apr 16 04:10:47.195646 systemd[1]: Started cri-containerd-6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117.scope - libcontainer container 6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117.
Apr 16 04:10:47.618962 kubelet[2980]: E0416 04:10:47.604424 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:48.176452 containerd[1575]: time="2026-04-16T04:10:48.176238349Z" level=info msg="StartContainer for \"6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117\" returns successfully"
Apr 16 04:10:49.583897 kubelet[2980]: E0416 04:10:49.580709 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:51.803019 kubelet[2980]: E0416 04:10:51.794276 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:52.125256 kubelet[2980]: E0416 04:10:51.961048 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:10:52.339117 systemd[1]: Started sshd@70-10.0.0.115:22-10.0.0.1:34816.service - OpenSSH per-connection server daemon (10.0.0.1:34816).
Apr 16 04:10:53.378326 kubelet[2980]: E0416 04:10:53.368787 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:10:54.723309 kubelet[2980]: E0416 04:10:54.669143 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.6s"
Apr 16 04:10:54.880920 kubelet[2980]: E0416 04:10:54.831069 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:56.790034 kubelet[2980]: E0416 04:10:56.771861 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:57.307494 kubelet[2980]: E0416 04:10:57.294760 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:10:58.978988 kubelet[2980]: E0416 04:10:58.956317 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:10:59.969336 kubelet[2980]: E0416 04:10:59.967246 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.389s"
Apr 16 04:11:00.916894 systemd[1]: cri-containerd-23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b.scope: Deactivated successfully.
Apr 16 04:11:01.049873 systemd[1]: cri-containerd-23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b.scope: Consumed 3.816s CPU time, 39.3M memory peak, 240K read from disk.
Apr 16 04:11:01.145617 containerd[1575]: time="2026-04-16T04:11:01.118690731Z" level=info msg="received container exit event container_id:\"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\" id:\"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\" pid:5807 exit_status:1 exited_at:{seconds:1776312661 nanos:105433806}"
Apr 16 04:11:01.680740 kubelet[2980]: E0416 04:11:01.145346 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:11:02.086687 sshd[5962]: Accepted publickey for core from 10.0.0.1 port 34816 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:11:01.841908 sshd-session[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:11:02.672986 kubelet[2980]: E0416 04:11:02.666997 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:11:03.784477 systemd-logind[1549]: New session 71 of user core.
Apr 16 04:11:03.850631 systemd[1]: cri-containerd-1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090.scope: Deactivated successfully.
Apr 16 04:11:03.950442 systemd[1]: cri-containerd-1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090.scope: Consumed 4.568s CPU time, 17.5M memory peak, 1.1M read from disk.
Apr 16 04:11:04.301640 containerd[1575]: time="2026-04-16T04:11:04.134684864Z" level=info msg="received container exit event container_id:\"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" id:\"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" pid:5818 exit_status:1 exited_at:{seconds:1776312663 nanos:851004617}"
Apr 16 04:11:04.301886 systemd[1]: Started session-71.scope - Session 71 of User core.
Apr 16 04:11:04.345835 kubelet[2980]: E0416 04:11:04.311196 2980 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:10:53Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:10:53Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:10:53Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:10:53Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\\\",\\\"ghcr.io/flatcar/calico/node:v3.31.4\\\"],\\\"sizeBytes\\\":159838426},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\\\",\\\"quay.io/tigera/operator:v1.40.7\\\"],\\\"sizeBytes\\\":40842151},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\\\",\\\"registry.k8s.io/kube-apiserver:v1.35.4\\\"],\\\"sizeBytes\\\":27576022},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\\\",\\\"registry.k8s.io/kube-proxy:v1.35.4\\\"],\\\"sizeBytes\\\":25698944},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\\\",\\\"registry.k8s.io/etcd:3.6.6-0\\\"],\\\"sizeBytes\\\":23641797},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\\\",\\\"registry.k8s.io/coredns/coredns:v1.13.1\\\"],\\\"sizeBytes\\\":23553139},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\\\",\\\"registry.k8s.io/kube-controller-manager:v1.35.4\\\"],\\\"sizeBytes\\\":23018006},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\\\",\\\"registry.k8s.io/kube-scheduler:v1.35.4\\\"],\\\"sizeBytes\\\":17121655},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\\\"],\\\"sizeBytes\\\":6186255},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"localhost\": Patch \"https://10.0.0.115:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded"
Apr 16 04:11:08.112913 kubelet[2980]: E0416 04:11:08.111789 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:11:08.314954 systemd[1]: cri-containerd-461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142.scope: Deactivated successfully.
Apr 16 04:11:08.526418 systemd[1]: cri-containerd-461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142.scope: Consumed 51.479s CPU time, 24.9M memory peak, 516K read from disk.
Apr 16 04:11:09.166990 containerd[1575]: time="2026-04-16T04:11:08.739017076Z" level=info msg="received container exit event container_id:\"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\" id:\"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\" pid:4590 exit_status:1 exited_at:{seconds:1776312668 nanos:587902141}"
Apr 16 04:11:09.982877 containerd[1575]: time="2026-04-16T04:11:09.969437476Z" level=warning msg="container event discarded" container=d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0 type=CONTAINER_CREATED_EVENT
Apr 16 04:11:10.376392 containerd[1575]: time="2026-04-16T04:11:09.995655150Z" level=warning msg="container event discarded" container=d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0 type=CONTAINER_STARTED_EVENT
Apr 16 04:11:10.893441 kubelet[2980]: E0416 04:11:10.499828 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.875s"
Apr 16 04:11:10.893441 kubelet[2980]: E0416 04:11:10.672948 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:11:11.360431 containerd[1575]: time="2026-04-16T04:11:11.354507998Z" level=error msg="failed to handle container TaskExit event container_id:\"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\" id:\"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\" pid:5807 exit_status:1 exited_at:{seconds:1776312661 nanos:105433806}" error="failed to stop container: failed to delete task: context deadline exceeded"
Apr 16 04:11:11.383348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b-rootfs.mount: Deactivated successfully.
Apr 16 04:11:11.725550 kubelet[2980]: E0416 04:11:11.707353 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:11:12.560581 kubelet[2980]: E0416 04:11:12.552630 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.051s"
Apr 16 04:11:13.062161 containerd[1575]: time="2026-04-16T04:11:12.913384422Z" level=error msg="ttrpc: received message on inactive stream" stream=43
Apr 16 04:11:13.495855 containerd[1575]: time="2026-04-16T04:11:13.170458221Z" level=info msg="TaskExit event container_id:\"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\" id:\"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\" pid:5807 exit_status:1 exited_at:{seconds:1776312661 nanos:105433806}"
Apr 16 04:11:14.177436 containerd[1575]: time="2026-04-16T04:11:14.172916406Z" level=error msg="failed to handle container TaskExit event container_id:\"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" id:\"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" pid:5818 exit_status:1 exited_at:{seconds:1776312663 nanos:851004617}" error="failed to stop container: failed to delete task: context deadline exceeded"
Apr 16 04:11:14.969737 kubelet[2980]: E0416 04:11:14.969660 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:11:15.219010 kubelet[2980]: E0416 04:11:15.198899 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.646s"
Apr 16 04:11:16.283822 containerd[1575]: time="2026-04-16T04:11:16.277915818Z" level=info msg="Ensure that container 23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b in task-service has been cleanup successfully"
Apr 16 04:11:16.792448 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090-rootfs.mount: Deactivated successfully.
Apr 16 04:11:17.346514 kubelet[2980]: E0416 04:11:17.104350 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.896s"
Apr 16 04:11:17.346514 kubelet[2980]: E0416 04:11:17.158056 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:11:18.256121 containerd[1575]: time="2026-04-16T04:11:16.748842108Z" level=error msg="ttrpc: received message on inactive stream" stream=37
Apr 16 04:11:18.256121 containerd[1575]: time="2026-04-16T04:11:16.953276045Z" level=info msg="TaskExit event container_id:\"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" id:\"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" pid:5818 exit_status:1 exited_at:{seconds:1776312663 nanos:851004617}"
Apr 16 04:11:18.256121 containerd[1575]: time="2026-04-16T04:11:17.177046832Z" level=error msg="collecting metrics for 23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b" error="ttrpc: closed"
Apr 16 04:11:18.492592 kubelet[2980]: E0416 04:11:18.475754 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.315s"
Apr 16 04:11:19.036810 containerd[1575]: time="2026-04-16T04:11:19.030751564Z" level=error msg="failed to handle container TaskExit event container_id:\"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\" id:\"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\" pid:4590 exit_status:1 exited_at:{seconds:1776312668 nanos:587902141}" error="failed to stop container: context deadline exceeded"
Apr 16 04:11:19.186835 containerd[1575]: time="2026-04-16T04:11:19.182656570Z" level=error msg="ttrpc: received message on inactive stream" stream=97
Apr 16 04:11:19.320956 containerd[1575]: time="2026-04-16T04:11:19.192644683Z" level=error msg="ttrpc: received message on inactive stream" stream=101
Apr 16 04:11:19.786019 kubelet[2980]: E0416 04:11:19.784592 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.192s"
Apr 16 04:11:20.051499 kubelet[2980]: I0416 04:11:19.846234 2980 scope.go:122] "RemoveContainer" containerID="d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b"
Apr 16 04:11:20.051499 kubelet[2980]: I0416 04:11:19.975079 2980 scope.go:122] "RemoveContainer" containerID="23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b"
Apr 16 04:11:20.051499 kubelet[2980]: E0416 04:11:20.041221 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:11:20.300566 kubelet[2980]: E0416 04:11:20.138901 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b"
Apr 16 04:11:20.498711 containerd[1575]: time="2026-04-16T04:11:20.277051904Z" level=warning msg="container event discarded" container=a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be type=CONTAINER_CREATED_EVENT
Apr 16 04:11:20.584410 kubelet[2980]: E0416 04:11:20.535997 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:11:21.107519 containerd[1575]: time="2026-04-16T04:11:21.106498394Z" level=info msg="RemoveContainer for \"d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b\""
Apr 16 04:11:22.281493 containerd[1575]: time="2026-04-16T04:11:22.278706977Z" level=info msg="RemoveContainer for \"d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b\" returns successfully"
Apr 16 04:11:22.449417 sshd[5975]: Connection closed by 10.0.0.1 port 34816
Apr 16 04:11:22.475385 sshd-session[5962]: pam_unix(sshd:session): session closed for user core
Apr 16 04:11:22.945109 kubelet[2980]: E0416 04:11:22.644873 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.056s"
Apr 16 04:11:23.298919 systemd[1]: sshd@70-10.0.0.115:22-10.0.0.1:34816.service: Deactivated successfully.
Apr 16 04:11:23.434749 systemd[1]: sshd@70-10.0.0.115:22-10.0.0.1:34816.service: Consumed 2.687s CPU time, 5.3M memory peak.
Apr 16 04:11:24.159567 systemd[1]: session-71.scope: Deactivated successfully.
Apr 16 04:11:24.268570 systemd[1]: session-71.scope: Consumed 5.902s CPU time, 16.4M memory peak.
Apr 16 04:11:24.502871 systemd-logind[1549]: Session 71 logged out. Waiting for processes to exit.
Apr 16 04:11:24.934187 kubelet[2980]: E0416 04:11:24.670733 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.922s"
Apr 16 04:11:25.324513 systemd-logind[1549]: Removed session 71.
Apr 16 04:11:25.560199 kubelet[2980]: E0416 04:11:25.506039 2980 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:11:15Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:11:15Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:11:15Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:11:15Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\\\",\\\"ghcr.io/flatcar/calico/node:v3.31.4\\\"],\\\"sizeBytes\\\":159838426},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\\\",\\\"quay.io/tigera/operator:v1.40.7\\\"],\\\"sizeBytes\\\":40842151},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\\\",\\\"registry.k8s.io/kube-apiserver:v1.35.4\\\"],\\\"sizeBytes\\\":27576022},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\\\",\\\"registry.k8s.io/kube-proxy:v1.35.4\\\"],\\\"sizeBytes\\\":25698944},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\\\",\\\"registry.k8s.io/etcd:3.6.6-0\\\"],\\\"sizeBytes\\\":23641797},{\\\"names\\\":[\\\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\\\",\\\"registry.k8s.io/coredns/coredns:v1.13.1\\\"],\\\"sizeBytes\\\":23553139},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\\\",\\\"registry.k8s.io/kube-controller-manager:v1.35.4\\\"],\\\"sizeBytes\\\":23018006},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\\\",\\\"registry.k8s.io/kube-scheduler:v1.35.4\\\"],\\\"sizeBytes\\\":17121655},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\\\"],\\\"sizeBytes\\\":6186255},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\\\",\\\"registry.k8s.io/pause:3.10.1\\\"],\\\"sizeBytes\\\":320448},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368}]}}\" for node \"localhost\": Patch \"https://10.0.0.115:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 16 04:11:26.274028 kubelet[2980]: E0416 04:11:26.185967 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:11:26.986357 containerd[1575]: time="2026-04-16T04:11:26.985636415Z" level=error msg="Failed to handle backOff event container_id:\"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" id:\"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" pid:5818 exit_status:1 exited_at:{seconds:1776312663 nanos:851004617} for 1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 16 04:11:27.165557 containerd[1575]: time="2026-04-16T04:11:27.082610975Z" level=info msg="TaskExit event container_id:\"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\" id:\"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\" pid:4590 exit_status:1 exited_at:{seconds:1776312668 nanos:587902141}"
Apr 16 04:11:28.015794 kubelet[2980]: E0416 04:11:28.012746 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.321s"
Apr 16 04:11:28.232640 containerd[1575]: time="2026-04-16T04:11:28.058869899Z" level=warning msg="container event discarded" container=a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be type=CONTAINER_STARTED_EVENT
Apr 16 04:11:28.945395 systemd[1]: Started sshd@71-10.0.0.115:22-10.0.0.1:54676.service - OpenSSH per-connection server daemon (10.0.0.1:54676).
Apr 16 04:11:29.160334 containerd[1575]: time="2026-04-16T04:11:29.159488563Z" level=error msg="ttrpc: received message on inactive stream" stream=53 Apr 16 04:11:29.351498 kubelet[2980]: E0416 04:11:29.174934 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:31.458979 kubelet[2980]: E0416 04:11:31.454597 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.028s" Apr 16 04:11:31.503640 kubelet[2980]: E0416 04:11:31.503465 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:31.548988 kubelet[2980]: E0416 04:11:31.548934 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:11:32.476894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142-rootfs.mount: Deactivated successfully. 
Apr 16 04:11:32.589552 containerd[1575]: time="2026-04-16T04:11:32.579903364Z" level=info msg="TaskExit event container_id:\"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" id:\"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" pid:5818 exit_status:1 exited_at:{seconds:1776312663 nanos:851004617}" Apr 16 04:11:32.692264 kubelet[2980]: I0416 04:11:32.690456 2980 scope.go:122] "RemoveContainer" containerID="461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142" Apr 16 04:11:32.785331 kubelet[2980]: E0416 04:11:32.696197 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:11:32.785331 kubelet[2980]: E0416 04:11:32.696423 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 16 04:11:32.785331 kubelet[2980]: I0416 04:11:32.697211 2980 scope.go:122] "RemoveContainer" containerID="8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563" Apr 16 04:11:32.785331 kubelet[2980]: E0416 04:11:32.695629 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:32.795171 containerd[1575]: time="2026-04-16T04:11:32.790001587Z" level=info msg="RemoveContainer for \"8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563\"" Apr 16 04:11:32.813239 sshd[6030]: Accepted publickey for core from 10.0.0.1 port 54676 
ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:11:32.815872 sshd-session[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:11:32.862938 containerd[1575]: time="2026-04-16T04:11:32.862735476Z" level=info msg="RemoveContainer for \"8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563\" returns successfully" Apr 16 04:11:32.907864 systemd-logind[1549]: New session 72 of user core. Apr 16 04:11:32.920859 systemd[1]: Started session-72.scope - Session 72 of User core. Apr 16 04:11:33.283274 containerd[1575]: time="2026-04-16T04:11:33.283132281Z" level=error msg="collecting metrics for 1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090" error="ttrpc: closed" Apr 16 04:11:33.450297 systemd[1]: cri-containerd-6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117.scope: Deactivated successfully. Apr 16 04:11:33.467502 systemd[1]: cri-containerd-6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117.scope: Consumed 4.224s CPU time, 26.3M memory peak. 
Apr 16 04:11:33.479913 kubelet[2980]: E0416 04:11:33.478195 2980 cadvisor_stats_provider.go:569] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89b5fbad_4c87_4aac_9951_121c09bbd556.slice/cri-containerd-6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117.scope\": RecentStats: unable to find data in memory cache]" Apr 16 04:11:33.492547 containerd[1575]: time="2026-04-16T04:11:33.485139429Z" level=info msg="received container exit event container_id:\"6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117\" id:\"6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117\" pid:5942 exited_at:{seconds:1776312693 nanos:479906883}" Apr 16 04:11:33.572751 containerd[1575]: time="2026-04-16T04:11:33.556490414Z" level=warning msg="container event discarded" container=a772054b1662d5b14b77334d8c2cca1a22305ffd5ae226cb04472873a79046be type=CONTAINER_STOPPED_EVENT Apr 16 04:11:35.068986 kubelet[2980]: E0416 04:11:35.053511 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:36.463051 kubelet[2980]: E0416 04:11:36.447970 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.872s" Apr 16 04:11:36.463051 kubelet[2980]: I0416 04:11:36.448653 2980 scope.go:122] "RemoveContainer" containerID="83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929" Apr 16 04:11:36.463051 kubelet[2980]: I0416 04:11:36.453663 2980 scope.go:122] "RemoveContainer" containerID="1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090" Apr 16 04:11:36.463051 kubelet[2980]: E0416 04:11:36.454154 2980 dns.go:154] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:11:36.463051 kubelet[2980]: E0416 04:11:36.454366 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 16 04:11:37.054705 kubelet[2980]: E0416 04:11:37.054485 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:11:37.396559 kubelet[2980]: E0416 04:11:37.383352 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:37.830514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117-rootfs.mount: Deactivated successfully. 
Apr 16 04:11:38.659562 containerd[1575]: time="2026-04-16T04:11:38.653075154Z" level=info msg="RemoveContainer for \"83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929\"" Apr 16 04:11:39.348051 kubelet[2980]: E0416 04:11:39.338935 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.535s" Apr 16 04:11:39.711761 containerd[1575]: time="2026-04-16T04:11:39.669732346Z" level=info msg="RemoveContainer for \"83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929\" returns successfully" Apr 16 04:11:40.113166 kubelet[2980]: E0416 04:11:40.074417 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:11:40.557587 kubelet[2980]: I0416 04:11:40.548695 2980 scope.go:122] "RemoveContainer" containerID="461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142" Apr 16 04:11:40.784398 kubelet[2980]: E0416 04:11:40.558516 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:40.870716 kubelet[2980]: E0416 04:11:40.844874 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:11:41.054230 kubelet[2980]: E0416 04:11:41.053456 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" 
podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 16 04:11:41.154316 kubelet[2980]: I0416 04:11:41.060302 2980 scope.go:122] "RemoveContainer" containerID="1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090" Apr 16 04:11:41.176077 kubelet[2980]: E0416 04:11:41.174849 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:11:41.189296 kubelet[2980]: E0416 04:11:41.187277 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 16 04:11:41.765835 containerd[1575]: time="2026-04-16T04:11:41.752040267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 16 04:11:42.577508 kubelet[2980]: E0416 04:11:42.552621 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:11:42.577508 kubelet[2980]: E0416 04:11:42.572424 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:42.627556 sshd[6058]: Connection closed by 10.0.0.1 port 54676 Apr 16 04:11:42.774443 sshd-session[6030]: pam_unix(sshd:session): session closed for user core Apr 16 04:11:43.455578 systemd[1]: sshd@71-10.0.0.115:22-10.0.0.1:54676.service: Deactivated successfully. 
Apr 16 04:11:43.936520 systemd[1]: session-72.scope: Deactivated successfully. Apr 16 04:11:43.959481 systemd[1]: session-72.scope: Consumed 4.574s CPU time, 16.1M memory peak. Apr 16 04:11:44.357593 systemd-logind[1549]: Session 72 logged out. Waiting for processes to exit. Apr 16 04:11:44.926423 systemd-logind[1549]: Removed session 72. Apr 16 04:11:45.402954 kubelet[2980]: E0416 04:11:45.382591 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:47.825132 kubelet[2980]: E0416 04:11:47.824223 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.222s" Apr 16 04:11:48.174074 kubelet[2980]: E0416 04:11:47.835925 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:11:48.174074 kubelet[2980]: E0416 04:11:48.017789 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:50.422323 kubelet[2980]: E0416 04:11:50.412720 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.649s" Apr 16 04:11:50.457320 systemd[1]: Started sshd@72-10.0.0.115:22-10.0.0.1:37734.service - OpenSSH per-connection server daemon (10.0.0.1:37734). 
Apr 16 04:11:50.770989 kubelet[2980]: E0416 04:11:50.701076 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:50.868485 kubelet[2980]: I0416 04:11:50.778788 2980 scope.go:122] "RemoveContainer" containerID="461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142" Apr 16 04:11:50.898914 kubelet[2980]: E0416 04:11:50.881045 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:11:51.357933 containerd[1575]: time="2026-04-16T04:11:51.357772405Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:4,}" Apr 16 04:11:52.835282 containerd[1575]: time="2026-04-16T04:11:52.834488334Z" level=info msg="Container 0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:11:52.861079 kubelet[2980]: E0416 04:11:52.860439 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:52.965710 kubelet[2980]: E0416 04:11:52.965367 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:11:52.993381 containerd[1575]: time="2026-04-16T04:11:52.988692732Z" level=info 
msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:4,} returns container id \"0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412\"" Apr 16 04:11:53.028625 containerd[1575]: time="2026-04-16T04:11:53.028528117Z" level=info msg="StartContainer for \"0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412\"" Apr 16 04:11:53.033198 containerd[1575]: time="2026-04-16T04:11:53.033061137Z" level=info msg="connecting to shim 0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412" address="unix:///run/containerd/s/64fc1d346c666396b4a6f4eda52f8f58d8abeacdc8da519fac54d1b45f3029a3" protocol=ttrpc version=3 Apr 16 04:11:53.471426 sshd[6095]: Accepted publickey for core from 10.0.0.1 port 37734 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:11:53.544005 sshd-session[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:11:53.666720 systemd[1]: Started cri-containerd-0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412.scope - libcontainer container 0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412. Apr 16 04:11:53.813603 systemd-logind[1549]: New session 73 of user core. Apr 16 04:11:53.846926 systemd[1]: Started session-73.scope - Session 73 of User core. 
Apr 16 04:11:54.664298 kubelet[2980]: E0416 04:11:54.662408 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:55.883520 containerd[1575]: time="2026-04-16T04:11:55.860428158Z" level=error msg="get state for 0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412" error="context deadline exceeded" Apr 16 04:11:55.883520 containerd[1575]: time="2026-04-16T04:11:55.860923279Z" level=warning msg="unknown status" status=0 Apr 16 04:11:56.603608 containerd[1575]: time="2026-04-16T04:11:56.578643573Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 04:11:56.663720 kubelet[2980]: E0416 04:11:56.660946 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:11:58.379554 containerd[1575]: time="2026-04-16T04:11:58.375033357Z" level=info msg="StartContainer for \"0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412\" returns successfully" Apr 16 04:11:58.838252 kubelet[2980]: E0416 04:11:58.469869 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:11:58.838252 kubelet[2980]: E0416 04:11:58.641951 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:12:01.061059 kubelet[2980]: E0416 04:12:01.060191 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:12:01.470379 kubelet[2980]: E0416 04:12:01.444534 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:12:02.378385 kubelet[2980]: E0416 04:12:02.316152 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:12:04.039693 kubelet[2980]: E0416 04:12:04.001722 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:12:04.510349 kubelet[2980]: E0416 04:12:04.365834 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.722s" Apr 16 04:12:04.509329 sshd-session[6095]: pam_unix(sshd:session): session closed for user core Apr 16 04:12:05.054478 sshd[6119]: Connection closed by 10.0.0.1 port 37734 Apr 16 04:12:05.454664 kubelet[2980]: E0416 04:12:04.544401 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:12:05.993289 systemd[1]: 
sshd@72-10.0.0.115:22-10.0.0.1:37734.service: Deactivated successfully. Apr 16 04:12:06.505863 systemd[1]: session-73.scope: Deactivated successfully. Apr 16 04:12:06.530712 systemd[1]: session-73.scope: Consumed 3.724s CPU time, 16.1M memory peak. Apr 16 04:12:06.914021 systemd-logind[1549]: Session 73 logged out. Waiting for processes to exit. Apr 16 04:12:06.932944 kubelet[2980]: E0416 04:12:06.860606 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.254s" Apr 16 04:12:07.158883 kubelet[2980]: E0416 04:12:06.961081 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:12:07.146880 systemd-logind[1549]: Removed session 73. Apr 16 04:12:09.202294 kubelet[2980]: E0416 04:12:08.889979 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.013s" Apr 16 04:12:09.540376 kubelet[2980]: E0416 04:12:09.453059 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:12:10.085775 kubelet[2980]: E0416 04:12:10.075621 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:12:10.255484 kubelet[2980]: E0416 04:12:10.252222 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:12:10.506795 kubelet[2980]: E0416 04:12:10.288446 2980 pod_workers.go:1324] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:12:11.091008 systemd[1]: Started sshd@73-10.0.0.115:22-10.0.0.1:51260.service - OpenSSH per-connection server daemon (10.0.0.1:51260). Apr 16 04:12:15.568628 kubelet[2980]: E0416 04:12:15.565408 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:12:16.101144 kubelet[2980]: E0416 04:12:16.099169 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.52s" Apr 16 04:12:16.114066 kubelet[2980]: E0416 04:12:16.113984 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:12:16.114725 kubelet[2980]: E0416 04:12:16.114667 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:12:16.461281 sshd[6152]: Accepted publickey for core from 10.0.0.1 port 51260 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:12:16.500390 sshd-session[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:12:16.658570 systemd-logind[1549]: New session 74 of user core. Apr 16 04:12:16.680599 systemd[1]: Started session-74.scope - Session 74 of User core. 
Apr 16 04:12:17.514659 kubelet[2980]: E0416 04:12:17.514456 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:12:17.580631 kubelet[2980]: E0416 04:12:17.580233 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:12:17.782545 sshd[6158]: Connection closed by 10.0.0.1 port 51260 Apr 16 04:12:17.878641 sshd-session[6152]: pam_unix(sshd:session): session closed for user core Apr 16 04:12:18.139320 systemd[1]: sshd@73-10.0.0.115:22-10.0.0.1:51260.service: Deactivated successfully. Apr 16 04:12:18.149992 systemd[1]: sshd@73-10.0.0.115:22-10.0.0.1:51260.service: Consumed 1.570s CPU time, 3.5M memory peak. Apr 16 04:12:18.293384 systemd[1]: session-74.scope: Deactivated successfully. Apr 16 04:12:18.561389 systemd-logind[1549]: Session 74 logged out. Waiting for processes to exit. Apr 16 04:12:18.760577 systemd-logind[1549]: Removed session 74. 
Apr 16 04:12:19.663952 kubelet[2980]: E0416 04:12:19.653281 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:12:20.926781 kubelet[2980]: E0416 04:12:20.925769 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:12:21.744490 kubelet[2980]: E0416 04:12:21.666965 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:12:23.685381 kubelet[2980]: E0416 04:12:23.678596 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:12:24.123707 systemd[1]: Started sshd@74-10.0.0.115:22-10.0.0.1:40994.service - OpenSSH per-connection server daemon (10.0.0.1:40994). 
Apr 16 04:12:26.137326 kubelet[2980]: E0416 04:12:26.133662 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:12:26.137326 kubelet[2980]: E0416 04:12:26.137269 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.534s"
Apr 16 04:12:26.972147 kubelet[2980]: E0416 04:12:26.185810 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:27.860977 containerd[1575]: time="2026-04-16T04:12:27.789621368Z" level=warning msg="container event discarded" container=83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929 type=CONTAINER_STOPPED_EVENT
Apr 16 04:12:29.063819 kubelet[2980]: E0416 04:12:29.061431 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.079s"
Apr 16 04:12:29.459999 containerd[1575]: time="2026-04-16T04:12:29.432720602Z" level=warning msg="container event discarded" container=d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b type=CONTAINER_STOPPED_EVENT
Apr 16 04:12:29.639492 kubelet[2980]: E0416 04:12:29.631330 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:29.986253 containerd[1575]: time="2026-04-16T04:12:29.983804408Z" level=warning msg="container event discarded" container=3042e128789dbe88f2af993bc31932e7815873ffa6bd108cebe3d1fb399790bb type=CONTAINER_DELETED_EVENT
Apr 16 04:12:30.614289 kubelet[2980]: I0416 04:12:30.613749 2980 scope.go:122] "RemoveContainer" containerID="23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b"
Apr 16 04:12:30.621138 kubelet[2980]: E0416 04:12:30.621105 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b"
Apr 16 04:12:31.236836 kubelet[2980]: E0416 04:12:31.236002 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:12:31.462263 containerd[1575]: time="2026-04-16T04:12:31.449794232Z" level=warning msg="container event discarded" container=7a769431ec8f834e4c83983415c413883258a8210f3e550c0c83dd5472fb3e90 type=CONTAINER_DELETED_EVENT
Apr 16 04:12:31.641576 kubelet[2980]: E0416 04:12:31.625357 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:31.670759 sshd[6188]: Accepted publickey for core from 10.0.0.1 port 40994 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:12:32.099460 sshd-session[6188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:12:33.074624 systemd[1]: Started session-75.scope - Session 75 of User core.
Apr 16 04:12:33.077450 systemd-logind[1549]: New session 75 of user core.
Apr 16 04:12:33.945174 kubelet[2980]: E0416 04:12:33.942119 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:36.039947 kubelet[2980]: E0416 04:12:36.032870 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.463s"
Apr 16 04:12:36.404578 kubelet[2980]: E0416 04:12:36.173796 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:37.266964 kubelet[2980]: E0416 04:12:37.265693 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:12:37.868027 kubelet[2980]: E0416 04:12:37.866568 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.186s"
Apr 16 04:12:38.149617 kubelet[2980]: E0416 04:12:37.954931 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:39.929255 kubelet[2980]: E0416 04:12:39.925508 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.357s"
Apr 16 04:12:40.323057 kubelet[2980]: E0416 04:12:40.175851 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:42.487800 kubelet[2980]: E0416 04:12:42.486150 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:12:42.895853 kubelet[2980]: E0416 04:12:42.605359 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:43.759941 sshd[6191]: Connection closed by 10.0.0.1 port 40994
Apr 16 04:12:43.957056 kubelet[2980]: E0416 04:12:43.845179 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.253s"
Apr 16 04:12:43.957056 kubelet[2980]: I0416 04:12:43.847344 2980 scope.go:122] "RemoveContainer" containerID="1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090"
Apr 16 04:12:43.957056 kubelet[2980]: E0416 04:12:43.848566 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:43.957056 kubelet[2980]: E0416 04:12:43.850645 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:12:43.957056 kubelet[2980]: E0416 04:12:43.853944 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:12:43.940581 sshd-session[6188]: pam_unix(sshd:session): session closed for user core
Apr 16 04:12:44.224502 systemd[1]: sshd@74-10.0.0.115:22-10.0.0.1:40994.service: Deactivated successfully.
Apr 16 04:12:44.326230 systemd[1]: sshd@74-10.0.0.115:22-10.0.0.1:40994.service: Consumed 2.296s CPU time, 4.3M memory peak.
Apr 16 04:12:44.460487 systemd[1]: session-75.scope: Deactivated successfully.
Apr 16 04:12:44.473469 systemd[1]: session-75.scope: Consumed 4.598s CPU time, 16.5M memory peak.
Apr 16 04:12:44.562808 systemd-logind[1549]: Session 75 logged out. Waiting for processes to exit.
Apr 16 04:12:44.839582 systemd-logind[1549]: Removed session 75.
Apr 16 04:12:45.655958 kubelet[2980]: E0416 04:12:45.650223 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:47.151141 kubelet[2980]: E0416 04:12:47.118987 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:47.673574 kubelet[2980]: E0416 04:12:47.663472 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:12:49.523523 kubelet[2980]: E0416 04:12:49.512588 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:49.732603 systemd[1]: Started sshd@75-10.0.0.115:22-10.0.0.1:46964.service - OpenSSH per-connection server daemon (10.0.0.1:46964).
Apr 16 04:12:52.662375 kubelet[2980]: E0416 04:12:52.660507 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.973s"
Apr 16 04:12:53.099139 kubelet[2980]: E0416 04:12:53.059647 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:53.125358 kubelet[2980]: E0416 04:12:53.122891 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:12:53.192948 kubelet[2980]: E0416 04:12:53.131758 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:12:55.247975 kubelet[2980]: E0416 04:12:55.242805 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:56.967488 kubelet[2980]: E0416 04:12:56.965411 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.387s"
Apr 16 04:12:57.160926 kubelet[2980]: E0416 04:12:57.156535 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:12:58.888667 kubelet[2980]: E0416 04:12:58.807857 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:12:59.962377 kubelet[2980]: E0416 04:12:59.962301 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:00.207546 sshd[6208]: Accepted publickey for core from 10.0.0.1 port 46964 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:13:00.964022 sshd-session[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:13:01.661323 kubelet[2980]: E0416 04:13:01.656228 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.977s"
Apr 16 04:13:01.918606 systemd-logind[1549]: New session 76 of user core.
Apr 16 04:13:02.060506 systemd[1]: Started session-76.scope - Session 76 of User core.
Apr 16 04:13:02.196009 kubelet[2980]: E0416 04:13:01.958123 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:03.577296 kubelet[2980]: E0416 04:13:03.570972 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:04.241731 kubelet[2980]: E0416 04:13:04.234843 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:13:05.883518 kubelet[2980]: E0416 04:13:05.869001 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.302s"
Apr 16 04:13:06.041973 kubelet[2980]: E0416 04:13:05.880037 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:07.884804 kubelet[2980]: E0416 04:13:07.881964 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:09.647444 kubelet[2980]: E0416 04:13:09.646730 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:10.003993 kubelet[2980]: E0416 04:13:09.946895 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:13:11.551252 sshd[6215]: Connection closed by 10.0.0.1 port 46964
Apr 16 04:13:11.605054 kubelet[2980]: E0416 04:13:11.596551 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:11.694216 sshd-session[6208]: pam_unix(sshd:session): session closed for user core
Apr 16 04:13:11.820864 systemd[1]: sshd@75-10.0.0.115:22-10.0.0.1:46964.service: Deactivated successfully.
Apr 16 04:13:11.822179 systemd[1]: sshd@75-10.0.0.115:22-10.0.0.1:46964.service: Consumed 3.310s CPU time, 4.5M memory peak.
Apr 16 04:13:11.974042 systemd[1]: session-76.scope: Deactivated successfully.
Apr 16 04:13:12.007208 systemd[1]: session-76.scope: Consumed 4.056s CPU time, 16.4M memory peak.
Apr 16 04:13:12.061665 systemd-logind[1549]: Session 76 logged out. Waiting for processes to exit.
Apr 16 04:13:12.270048 systemd-logind[1549]: Removed session 76.
Apr 16 04:13:13.587513 kubelet[2980]: E0416 04:13:13.584761 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:13.764651 kubelet[2980]: E0416 04:13:13.644947 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:13:15.016214 kubelet[2980]: E0416 04:13:15.015615 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:13:15.585340 kubelet[2980]: E0416 04:13:15.583852 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:17.385135 systemd[1]: Started sshd@76-10.0.0.115:22-10.0.0.1:37048.service - OpenSSH per-connection server daemon (10.0.0.1:37048).
Apr 16 04:13:17.653667 kubelet[2980]: E0416 04:13:17.650464 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:18.631524 kubelet[2980]: E0416 04:13:18.608016 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:13:19.285418 sshd[6239]: Accepted publickey for core from 10.0.0.1 port 37048 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:13:19.286985 sshd-session[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:13:19.537974 systemd-logind[1549]: New session 77 of user core.
Apr 16 04:13:19.594963 kubelet[2980]: E0416 04:13:19.591413 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:19.599367 systemd[1]: Started session-77.scope - Session 77 of User core.
Apr 16 04:13:20.132760 kubelet[2980]: E0416 04:13:20.132047 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:13:21.894830 kubelet[2980]: E0416 04:13:21.888315 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:23.187066 sshd[6242]: Connection closed by 10.0.0.1 port 37048
Apr 16 04:13:23.221975 sshd-session[6239]: pam_unix(sshd:session): session closed for user core
Apr 16 04:13:23.286172 systemd-logind[1549]: Session 77 logged out. Waiting for processes to exit.
Apr 16 04:13:23.309067 systemd[1]: sshd@76-10.0.0.115:22-10.0.0.1:37048.service: Deactivated successfully.
Apr 16 04:13:23.500641 systemd[1]: session-77.scope: Deactivated successfully.
Apr 16 04:13:23.533856 systemd[1]: session-77.scope: Consumed 1.360s CPU time, 16.5M memory peak.
Apr 16 04:13:23.573211 kubelet[2980]: E0416 04:13:23.572923 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:23.575703 systemd-logind[1549]: Removed session 77.
Apr 16 04:13:25.250069 kubelet[2980]: E0416 04:13:25.236078 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:25.394176 kubelet[2980]: E0416 04:13:25.372858 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:13:26.597287 kubelet[2980]: E0416 04:13:26.594601 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:28.636151 kubelet[2980]: E0416 04:13:28.632145 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:29.044203 systemd[1]: Started sshd@77-10.0.0.115:22-10.0.0.1:40938.service - OpenSSH per-connection server daemon (10.0.0.1:40938).
Apr 16 04:13:30.525407 kubelet[2980]: E0416 04:13:30.521798 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:13:30.738029 kubelet[2980]: E0416 04:13:30.722521 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:32.366279 sshd[6260]: Accepted publickey for core from 10.0.0.1 port 40938 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:13:32.396501 sshd-session[6260]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:13:32.883393 kubelet[2980]: E0416 04:13:32.805673 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:33.133875 systemd-logind[1549]: New session 78 of user core.
Apr 16 04:13:33.252573 systemd[1]: Started session-78.scope - Session 78 of User core.
Apr 16 04:13:34.598838 kubelet[2980]: E0416 04:13:34.596317 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:35.544485 kubelet[2980]: E0416 04:13:35.543648 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:13:36.127780 containerd[1575]: time="2026-04-16T04:13:36.100269798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:13:36.127780 containerd[1575]: time="2026-04-16T04:13:36.201658389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671"
Apr 16 04:13:36.653961 containerd[1575]: time="2026-04-16T04:13:36.651293410Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:13:36.716580 containerd[1575]: time="2026-04-16T04:13:36.713411295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 16 04:13:36.749540 containerd[1575]: time="2026-04-16T04:13:36.749237994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 1m54.996962118s"
Apr 16 04:13:36.749540 containerd[1575]: time="2026-04-16T04:13:36.749335848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\""
Apr 16 04:13:36.755049 kubelet[2980]: E0416 04:13:36.748481 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:37.191834 sshd[6264]: Connection closed by 10.0.0.1 port 40938
Apr 16 04:13:37.270228 containerd[1575]: time="2026-04-16T04:13:37.177464074Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 16 04:13:37.238903 sshd-session[6260]: pam_unix(sshd:session): session closed for user core
Apr 16 04:13:37.490867 systemd[1]: sshd@77-10.0.0.115:22-10.0.0.1:40938.service: Deactivated successfully.
Apr 16 04:13:37.553599 systemd[1]: session-78.scope: Deactivated successfully.
Apr 16 04:13:37.558039 systemd[1]: session-78.scope: Consumed 1.781s CPU time, 16.8M memory peak.
Apr 16 04:13:37.588982 systemd-logind[1549]: Session 78 logged out. Waiting for processes to exit.
Apr 16 04:13:37.678571 systemd-logind[1549]: Removed session 78.
Apr 16 04:13:37.779212 containerd[1575]: time="2026-04-16T04:13:37.772486384Z" level=info msg="Container 30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:13:38.392122 containerd[1575]: time="2026-04-16T04:13:38.390487487Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8\""
Apr 16 04:13:38.431320 containerd[1575]: time="2026-04-16T04:13:38.422702566Z" level=info msg="StartContainer for \"30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8\""
Apr 16 04:13:38.442962 containerd[1575]: time="2026-04-16T04:13:38.442554489Z" level=info msg="connecting to shim 30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8" address="unix:///run/containerd/s/aeabc3715f557963c617c8591f62e432aca8901fc2a59ed43a1f9f47d5f9452d" protocol=ttrpc version=3
Apr 16 04:13:38.724523 kubelet[2980]: E0416 04:13:38.724468 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:38.735849 kubelet[2980]: I0416 04:13:38.735690 2980 scope.go:122] "RemoveContainer" containerID="23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b"
Apr 16 04:13:38.737558 kubelet[2980]: E0416 04:13:38.737521 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b"
Apr 16 04:13:39.466143 systemd[1]: Started cri-containerd-30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8.scope - libcontainer container 30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8.
Apr 16 04:13:40.584392 kubelet[2980]: E0416 04:13:40.574918 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:13:40.584392 kubelet[2980]: E0416 04:13:40.575070 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:41.715211 containerd[1575]: time="2026-04-16T04:13:41.708837273Z" level=error msg="get state for 30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8" error="context deadline exceeded"
Apr 16 04:13:41.715211 containerd[1575]: time="2026-04-16T04:13:41.709852089Z" level=warning msg="unknown status" status=0
Apr 16 04:13:42.568551 systemd[1]: Started sshd@78-10.0.0.115:22-10.0.0.1:55688.service - OpenSSH per-connection server daemon (10.0.0.1:55688).
Apr 16 04:13:42.603955 kubelet[2980]: E0416 04:13:42.603894 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:43.192354 containerd[1575]: time="2026-04-16T04:13:43.191772582Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 16 04:13:43.534360 containerd[1575]: time="2026-04-16T04:13:43.519847891Z" level=info msg="StartContainer for \"30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8\" returns successfully"
Apr 16 04:13:44.162261 sshd[6304]: Accepted publickey for core from 10.0.0.1 port 55688 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:13:44.262635 sshd-session[6304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:13:44.558342 systemd-logind[1549]: New session 79 of user core.
Apr 16 04:13:44.679207 systemd[1]: Started session-79.scope - Session 79 of User core.
Apr 16 04:13:44.702387 kubelet[2980]: E0416 04:13:44.702342 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:45.663009 kubelet[2980]: E0416 04:13:45.662759 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Apr 16 04:13:46.315020 sshd[6319]: Connection closed by 10.0.0.1 port 55688
Apr 16 04:13:46.349884 sshd-session[6304]: pam_unix(sshd:session): session closed for user core
Apr 16 04:13:46.599464 kubelet[2980]: E0416 04:13:46.597739 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767"
Apr 16 04:13:46.599561 systemd[1]: sshd@78-10.0.0.115:22-10.0.0.1:55688.service: Deactivated successfully.
Apr 16 04:13:46.828929 systemd[1]: session-79.scope: Deactivated successfully.
Apr 16 04:13:46.857822 systemd-logind[1549]: Session 79 logged out. Waiting for processes to exit.
Apr 16 04:13:46.895643 systemd-logind[1549]: Removed session 79.
Apr 16 04:13:48.592246 kubelet[2980]: E0416 04:13:48.577798 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:13:50.640844 kubelet[2980]: E0416 04:13:50.638845 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:13:50.847061 kubelet[2980]: E0416 04:13:50.791506 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:13:52.024717 systemd[1]: Started sshd@79-10.0.0.115:22-10.0.0.1:33390.service - OpenSSH per-connection server daemon (10.0.0.1:33390). Apr 16 04:13:53.037974 kubelet[2980]: E0416 04:13:53.037912 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:13:53.627290 sshd[6336]: Accepted publickey for core from 10.0.0.1 port 33390 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:13:53.626331 sshd-session[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:13:53.760615 systemd-logind[1549]: New session 80 of user core. Apr 16 04:13:53.766478 systemd[1]: Started session-80.scope - Session 80 of User core. 
Apr 16 04:13:54.605440 kubelet[2980]: E0416 04:13:54.599294 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:13:55.575396 kubelet[2980]: I0416 04:13:55.569643 2980 scope.go:122] "RemoveContainer" containerID="1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090" Apr 16 04:13:55.594919 kubelet[2980]: E0416 04:13:55.577593 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:13:55.594919 kubelet[2980]: E0416 04:13:55.593816 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 16 04:13:55.848456 kubelet[2980]: E0416 04:13:55.844464 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:13:56.371313 sshd[6339]: Connection closed by 10.0.0.1 port 33390 Apr 16 04:13:56.374959 sshd-session[6336]: pam_unix(sshd:session): session closed for user core Apr 16 04:13:56.568821 systemd[1]: sshd@79-10.0.0.115:22-10.0.0.1:33390.service: Deactivated successfully. 
Apr 16 04:13:56.607058 kubelet[2980]: E0416 04:13:56.606403 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:13:56.689016 systemd[1]: session-80.scope: Deactivated successfully. Apr 16 04:13:56.875717 systemd-logind[1549]: Session 80 logged out. Waiting for processes to exit. Apr 16 04:13:56.953961 systemd[1]: Started sshd@80-10.0.0.115:22-10.0.0.1:38262.service - OpenSSH per-connection server daemon (10.0.0.1:38262). Apr 16 04:13:56.993889 systemd-logind[1549]: Removed session 80. Apr 16 04:13:57.926165 sshd[6353]: Accepted publickey for core from 10.0.0.1 port 38262 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:13:57.930325 sshd-session[6353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:13:58.603937 kubelet[2980]: E0416 04:13:58.603283 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:13:58.753306 systemd-logind[1549]: New session 81 of user core. Apr 16 04:13:58.970909 systemd[1]: Started session-81.scope - Session 81 of User core. 
Apr 16 04:14:00.648240 kubelet[2980]: E0416 04:14:00.645545 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:00.860624 kubelet[2980]: E0416 04:14:00.860077 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:14:02.652175 kubelet[2980]: E0416 04:14:02.648709 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:04.612686 kubelet[2980]: E0416 04:14:04.598331 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:05.928582 kubelet[2980]: E0416 04:14:05.922652 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:14:07.000846 kubelet[2980]: E0416 04:14:06.993518 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" 
podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:07.555217 sshd[6356]: Connection closed by 10.0.0.1 port 38262 Apr 16 04:14:07.567771 sshd-session[6353]: pam_unix(sshd:session): session closed for user core Apr 16 04:14:08.957749 kubelet[2980]: E0416 04:14:08.764040 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:08.962882 systemd[1]: Started sshd@81-10.0.0.115:22-10.0.0.1:60562.service - OpenSSH per-connection server daemon (10.0.0.1:60562). Apr 16 04:14:09.327668 systemd[1]: sshd@80-10.0.0.115:22-10.0.0.1:38262.service: Deactivated successfully. Apr 16 04:14:09.848893 systemd[1]: session-81.scope: Deactivated successfully. Apr 16 04:14:09.971468 systemd[1]: session-81.scope: Consumed 3.593s CPU time, 52.4M memory peak. Apr 16 04:14:10.177732 systemd-logind[1549]: Session 81 logged out. Waiting for processes to exit. Apr 16 04:14:10.884603 systemd-logind[1549]: Removed session 81. 
Apr 16 04:14:11.537607 kubelet[2980]: E0416 04:14:11.535503 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:14:11.741315 kubelet[2980]: E0416 04:14:11.738722 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.168s" Apr 16 04:14:11.795608 kubelet[2980]: E0416 04:14:11.766865 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:13.631472 kubelet[2980]: E0416 04:14:13.618165 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:14.654463 sshd[6366]: Accepted publickey for core from 10.0.0.1 port 60562 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:14:15.213578 sshd-session[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:14:15.680588 kubelet[2980]: E0416 04:14:15.597191 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:16.758908 kubelet[2980]: E0416 04:14:16.758306 2980 kubelet.go:3130] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:14:16.885998 systemd-logind[1549]: New session 82 of user core. Apr 16 04:14:17.112069 systemd[1]: Started session-82.scope - Session 82 of User core. Apr 16 04:14:17.583813 kubelet[2980]: E0416 04:14:17.581966 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:20.982867 kubelet[2980]: E0416 04:14:20.951580 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.373s" Apr 16 04:14:22.237800 kubelet[2980]: E0416 04:14:22.099990 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:14:22.653361 kubelet[2980]: E0416 04:14:22.422449 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.469s" Apr 16 04:14:23.388838 kubelet[2980]: E0416 04:14:23.388515 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:14:23.929157 kubelet[2980]: E0416 04:14:23.388885 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:24.625071 kubelet[2980]: E0416 04:14:24.607769 2980 kubelet.go:2691] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.904s" Apr 16 04:14:27.825752 kubelet[2980]: E0416 04:14:27.821971 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:14:27.871163 kubelet[2980]: E0416 04:14:27.857625 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.203s" Apr 16 04:14:28.143367 kubelet[2980]: E0416 04:14:28.078014 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:30.466962 kubelet[2980]: E0416 04:14:30.453445 2980 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Apr 16 04:14:30.646472 kubelet[2980]: E0416 04:14:30.453660 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.477s" Apr 16 04:14:33.054780 kubelet[2980]: E0416 04:14:33.045912 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.57s" Apr 16 04:14:33.467908 kubelet[2980]: E0416 04:14:33.170634 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:14:34.085877 kubelet[2980]: E0416 04:14:34.084955 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:34.696620 kubelet[2980]: E0416 04:14:34.696218 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:14:35.606958 kubelet[2980]: E0416 04:14:35.601474 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:14:35.646985 kubelet[2980]: E0416 04:14:35.616124 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:35.644202 systemd[1]: cri-containerd-0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412.scope: Deactivated successfully. Apr 16 04:14:35.699055 systemd[1]: cri-containerd-0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412.scope: Consumed 26.520s CPU time, 23.2M memory peak, 752K read from disk. Apr 16 04:14:35.820184 containerd[1575]: time="2026-04-16T04:14:35.699696105Z" level=info msg="received container exit event container_id:\"0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412\" id:\"0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412\" pid:6113 exit_status:1 exited_at:{seconds:1776312875 nanos:695218036}" Apr 16 04:14:37.357897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412-rootfs.mount: Deactivated successfully. 
Apr 16 04:14:37.571282 kubelet[2980]: E0416 04:14:37.568904 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:38.228425 kubelet[2980]: E0416 04:14:38.226651 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:14:38.530752 kubelet[2980]: I0416 04:14:38.524725 2980 scope.go:122] "RemoveContainer" containerID="461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142" Apr 16 04:14:38.595215 kubelet[2980]: I0416 04:14:38.582010 2980 scope.go:122] "RemoveContainer" containerID="0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412" Apr 16 04:14:38.688252 kubelet[2980]: E0416 04:14:38.596637 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:14:38.702118 kubelet[2980]: E0416 04:14:38.701156 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 16 04:14:38.791845 containerd[1575]: time="2026-04-16T04:14:38.766080207Z" level=info msg="RemoveContainer for \"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\"" Apr 16 04:14:39.080075 containerd[1575]: time="2026-04-16T04:14:39.078563164Z" level=info msg="RemoveContainer for 
\"461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142\" returns successfully" Apr 16 04:14:39.685194 kubelet[2980]: E0416 04:14:39.684624 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:40.117785 kubelet[2980]: I0416 04:14:40.095915 2980 scope.go:122] "RemoveContainer" containerID="0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412" Apr 16 04:14:40.117785 kubelet[2980]: E0416 04:14:40.096198 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:14:40.123165 kubelet[2980]: E0416 04:14:40.111180 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca" Apr 16 04:14:41.590977 kubelet[2980]: E0416 04:14:41.589213 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:43.316944 kubelet[2980]: E0416 04:14:43.313555 2980 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Apr 16 04:14:43.587526 kubelet[2980]: E0416 04:14:43.583515 2980 
pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:45.650283 kubelet[2980]: E0416 04:14:45.635474 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:46.280912 systemd[1]: cri-containerd-30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8.scope: Deactivated successfully. Apr 16 04:14:46.281544 systemd[1]: cri-containerd-30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8.scope: Consumed 13.673s CPU time, 175.5M memory peak, 2.5M read from disk, 177M written to disk. Apr 16 04:14:46.351374 containerd[1575]: time="2026-04-16T04:14:46.350945395Z" level=info msg="received container exit event container_id:\"30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8\" id:\"30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8\" pid:6297 exited_at:{seconds:1776312886 nanos:349613539}" Apr 16 04:14:46.382453 sshd[6375]: Connection closed by 10.0.0.1 port 60562 Apr 16 04:14:46.399556 sshd-session[6366]: pam_unix(sshd:session): session closed for user core Apr 16 04:14:46.677857 systemd[1]: sshd@81-10.0.0.115:22-10.0.0.1:60562.service: Deactivated successfully. Apr 16 04:14:46.679764 systemd[1]: sshd@81-10.0.0.115:22-10.0.0.1:60562.service: Consumed 1.767s CPU time, 4.1M memory peak. Apr 16 04:14:46.684638 systemd[1]: session-82.scope: Deactivated successfully. Apr 16 04:14:46.685280 systemd[1]: session-82.scope: Consumed 11.861s CPU time, 48.6M memory peak. 
Apr 16 04:14:46.705816 systemd-logind[1549]: Session 82 logged out. Waiting for processes to exit. Apr 16 04:14:46.739861 systemd[1]: Started sshd@82-10.0.0.115:22-10.0.0.1:50436.service - OpenSSH per-connection server daemon (10.0.0.1:50436). Apr 16 04:14:46.765227 systemd-logind[1549]: Removed session 82. Apr 16 04:14:46.920602 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8-rootfs.mount: Deactivated successfully. Apr 16 04:14:47.572578 kubelet[2980]: E0416 04:14:47.567324 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:48.106866 sshd[6423]: Accepted publickey for core from 10.0.0.1 port 50436 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:14:48.299746 sshd-session[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:14:48.370316 containerd[1575]: time="2026-04-16T04:14:48.352322798Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 16 04:14:48.667698 systemd-logind[1549]: New session 83 of user core. 
Apr 16 04:14:48.787044 kubelet[2980]: I0416 04:14:48.786539 2980 scope.go:122] "RemoveContainer" containerID="23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b" Apr 16 04:14:48.813656 kubelet[2980]: E0416 04:14:48.787989 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b" Apr 16 04:14:48.815183 containerd[1575]: time="2026-04-16T04:14:48.804334367Z" level=info msg="Container 833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:14:48.958652 systemd[1]: Started session-83.scope - Session 83 of User core. Apr 16 04:14:48.986640 containerd[1575]: time="2026-04-16T04:14:48.986540636Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\"" Apr 16 04:14:48.998811 containerd[1575]: time="2026-04-16T04:14:48.998731675Z" level=info msg="StartContainer for \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\"" Apr 16 04:14:49.012387 containerd[1575]: time="2026-04-16T04:14:49.012233480Z" level=info msg="connecting to shim 833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254" address="unix:///run/containerd/s/aeabc3715f557963c617c8591f62e432aca8901fc2a59ed43a1f9f47d5f9452d" protocol=ttrpc version=3 Apr 16 04:14:49.293711 systemd[1]: Created slice kubepods-besteffort-pod6bb8af70_d3bd_4282_a3de_bea0ffd9b767.slice - libcontainer container kubepods-besteffort-pod6bb8af70_d3bd_4282_a3de_bea0ffd9b767.slice. 
Apr 16 04:14:49.499577 containerd[1575]: time="2026-04-16T04:14:49.496813618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r69h8,Uid:6bb8af70-d3bd-4282-a3de-bea0ffd9b767,Namespace:calico-system,Attempt:0,}" Apr 16 04:14:50.054303 systemd[1]: Started cri-containerd-833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254.scope - libcontainer container 833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254. Apr 16 04:14:51.679772 containerd[1575]: time="2026-04-16T04:14:51.679249385Z" level=info msg="StartContainer for \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\" returns successfully" Apr 16 04:14:53.426188 sshd[6430]: Connection closed by 10.0.0.1 port 50436 Apr 16 04:14:53.466062 sshd-session[6423]: pam_unix(sshd:session): session closed for user core Apr 16 04:14:53.707471 kubelet[2980]: I0416 04:14:53.701929 2980 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-kgtx5" podStartSLOduration=16.187854869 podStartE2EDuration="8m53.701908194s" podCreationTimestamp="2026-04-16 04:06:00 +0000 UTC" firstStartedPulling="2026-04-16 04:06:10.116235101 +0000 UTC m=+839.156730095" lastFinishedPulling="2026-04-16 04:14:47.630288434 +0000 UTC m=+1356.670783420" observedRunningTime="2026-04-16 04:14:53.69062447 +0000 UTC m=+1362.731119484" watchObservedRunningTime="2026-04-16 04:14:53.701908194 +0000 UTC m=+1362.742403179" Apr 16 04:14:53.708952 systemd[1]: Started sshd@83-10.0.0.115:22-10.0.0.1:50446.service - OpenSSH per-connection server daemon (10.0.0.1:50446). Apr 16 04:14:53.709852 systemd[1]: sshd@82-10.0.0.115:22-10.0.0.1:50436.service: Deactivated successfully. Apr 16 04:14:53.730873 systemd[1]: session-83.scope: Deactivated successfully. Apr 16 04:14:53.732382 systemd[1]: session-83.scope: Consumed 1.708s CPU time, 30.1M memory peak. Apr 16 04:14:53.803178 systemd-logind[1549]: Session 83 logged out. Waiting for processes to exit. 
Apr 16 04:14:53.844596 systemd-logind[1549]: Removed session 83. Apr 16 04:14:54.077597 containerd[1575]: time="2026-04-16T04:14:54.076182281Z" level=error msg="Failed to destroy network for sandbox \"389197924ee4e56dde20791e844dc2911c487286567d54ebbb567c0087c62741\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 04:14:54.176229 systemd[1]: run-netns-cni\x2d6033a6e0\x2da7fb\x2dc188\x2df433\x2da3e45bd132da.mount: Deactivated successfully. Apr 16 04:14:54.255627 containerd[1575]: time="2026-04-16T04:14:54.255446035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r69h8,Uid:6bb8af70-d3bd-4282-a3de-bea0ffd9b767,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"389197924ee4e56dde20791e844dc2911c487286567d54ebbb567c0087c62741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 04:14:54.278939 kubelet[2980]: E0416 04:14:54.275958 2980 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"389197924ee4e56dde20791e844dc2911c487286567d54ebbb567c0087c62741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 16 04:14:54.291622 kubelet[2980]: E0416 04:14:54.283429 2980 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"389197924ee4e56dde20791e844dc2911c487286567d54ebbb567c0087c62741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r69h8" Apr 16 04:14:54.291622 kubelet[2980]: E0416 04:14:54.283782 2980 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"389197924ee4e56dde20791e844dc2911c487286567d54ebbb567c0087c62741\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r69h8" Apr 16 04:14:54.294135 kubelet[2980]: E0416 04:14:54.284061 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r69h8_calico-system(6bb8af70-d3bd-4282-a3de-bea0ffd9b767)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r69h8_calico-system(6bb8af70-d3bd-4282-a3de-bea0ffd9b767)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"389197924ee4e56dde20791e844dc2911c487286567d54ebbb567c0087c62741\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r69h8" podUID="6bb8af70-d3bd-4282-a3de-bea0ffd9b767" Apr 16 04:14:54.815077 sshd[6504]: Accepted publickey for core from 10.0.0.1 port 50446 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:14:54.846262 sshd-session[6504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:14:55.199733 systemd-logind[1549]: New session 84 of user core. Apr 16 04:14:55.386652 systemd[1]: Started session-84.scope - Session 84 of User core. 
Apr 16 04:14:57.938653 sshd[6534]: Connection closed by 10.0.0.1 port 50446 Apr 16 04:14:58.190363 sshd-session[6504]: pam_unix(sshd:session): session closed for user core Apr 16 04:14:58.857699 systemd[1]: sshd@83-10.0.0.115:22-10.0.0.1:50446.service: Deactivated successfully. Apr 16 04:14:59.150886 systemd[1]: session-84.scope: Deactivated successfully. Apr 16 04:14:59.156730 systemd[1]: session-84.scope: Consumed 1.225s CPU time, 15M memory peak. Apr 16 04:14:59.253984 systemd-logind[1549]: Session 84 logged out. Waiting for processes to exit. Apr 16 04:14:59.673748 systemd-logind[1549]: Removed session 84. Apr 16 04:15:03.573900 systemd[1]: Started sshd@84-10.0.0.115:22-10.0.0.1:33548.service - OpenSSH per-connection server daemon (10.0.0.1:33548). Apr 16 04:15:04.080353 containerd[1575]: time="2026-04-16T04:15:04.071043452Z" level=warning msg="container event discarded" container=23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b type=CONTAINER_CREATED_EVENT Apr 16 04:15:04.291902 containerd[1575]: time="2026-04-16T04:15:04.085354121Z" level=warning msg="container event discarded" container=1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090 type=CONTAINER_CREATED_EVENT Apr 16 04:15:05.708650 containerd[1575]: time="2026-04-16T04:15:05.707404911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r69h8,Uid:6bb8af70-d3bd-4282-a3de-bea0ffd9b767,Namespace:calico-system,Attempt:0,}" Apr 16 04:15:06.046988 sshd[6577]: Accepted publickey for core from 10.0.0.1 port 33548 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:15:06.245542 sshd-session[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:15:07.320740 systemd-logind[1549]: New session 85 of user core. Apr 16 04:15:07.337645 systemd[1]: Started session-85.scope - Session 85 of User core. 
Apr 16 04:15:14.995863 kubelet[2980]: E0416 04:15:14.993181 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.425s" Apr 16 04:15:20.060472 kubelet[2980]: E0416 04:15:19.942432 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.886s" Apr 16 04:15:22.165272 containerd[1575]: time="2026-04-16T04:15:22.164672765Z" level=warning msg="container event discarded" container=23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b type=CONTAINER_STARTED_EVENT Apr 16 04:15:22.675537 containerd[1575]: time="2026-04-16T04:15:22.355747544Z" level=warning msg="container event discarded" container=1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090 type=CONTAINER_STARTED_EVENT Apr 16 04:15:23.764525 kubelet[2980]: E0416 04:15:23.756043 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.581s" Apr 16 04:15:24.265039 kubelet[2980]: I0416 04:15:23.779289 2980 scope.go:122] "RemoveContainer" containerID="1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090" Apr 16 04:15:24.265039 kubelet[2980]: E0416 04:15:23.855289 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:15:24.265039 kubelet[2980]: E0416 04:15:23.995657 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2" Apr 16 04:15:25.264614 kubelet[2980]: E0416 04:15:25.209999 2980 kubelet.go:2691] "Housekeeping took longer than 
expected" err="housekeeping took too long" expected="1s" actual="1.413s" Apr 16 04:15:25.370166 sshd[6600]: Connection closed by 10.0.0.1 port 33548 Apr 16 04:15:25.623436 sshd-session[6577]: pam_unix(sshd:session): session closed for user core Apr 16 04:15:27.722746 systemd[1]: sshd@84-10.0.0.115:22-10.0.0.1:33548.service: Deactivated successfully. Apr 16 04:15:27.927403 systemd[1]: sshd@84-10.0.0.115:22-10.0.0.1:33548.service: Consumed 1.200s CPU time, 3.9M memory peak. Apr 16 04:15:28.893847 systemd[1]: session-85.scope: Deactivated successfully. Apr 16 04:15:29.116370 systemd[1]: session-85.scope: Consumed 8.392s CPU time, 16.2M memory peak. Apr 16 04:15:30.572822 systemd-logind[1549]: Session 85 logged out. Waiting for processes to exit. Apr 16 04:15:32.904972 systemd[1]: Started sshd@85-10.0.0.115:22-10.0.0.1:42436.service - OpenSSH per-connection server daemon (10.0.0.1:42436). Apr 16 04:15:32.957043 systemd-logind[1549]: Removed session 85. Apr 16 04:15:33.663174 kubelet[2980]: E0416 04:15:33.659773 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.266s" Apr 16 04:15:35.048319 kubelet[2980]: E0416 04:15:35.046061 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.383s" Apr 16 04:15:43.482661 kubelet[2980]: E0416 04:15:43.459454 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.871s" Apr 16 04:15:46.683755 containerd[1575]: time="2026-04-16T04:15:46.660504287Z" level=warning msg="container event discarded" container=6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117 type=CONTAINER_CREATED_EVENT Apr 16 04:15:47.192040 kubelet[2980]: E0416 04:15:47.065135 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.283s" Apr 16 04:15:48.168077 containerd[1575]: 
time="2026-04-16T04:15:48.165628741Z" level=warning msg="container event discarded" container=6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117 type=CONTAINER_STARTED_EVENT Apr 16 04:15:48.592686 kubelet[2980]: E0416 04:15:48.548251 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.297s" Apr 16 04:15:48.592686 kubelet[2980]: E0416 04:15:48.566764 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:15:48.722411 kubelet[2980]: E0416 04:15:48.641330 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:15:49.582265 sshd[6624]: Accepted publickey for core from 10.0.0.1 port 42436 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:15:49.670319 sshd-session[6624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:15:49.983639 systemd-logind[1549]: New session 86 of user core. Apr 16 04:15:50.030914 systemd[1]: Started session-86.scope - Session 86 of User core. 
Apr 16 04:15:52.345152 kubelet[2980]: E0416 04:15:52.344866 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.775s" Apr 16 04:15:56.663952 kubelet[2980]: E0416 04:15:56.655060 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.776s" Apr 16 04:16:01.774235 kubelet[2980]: E0416 04:16:01.772686 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.069s" Apr 16 04:16:01.954733 kubelet[2980]: I0416 04:16:01.954049 2980 scope.go:122] "RemoveContainer" containerID="0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412" Apr 16 04:16:01.967593 kubelet[2980]: E0416 04:16:01.966842 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:16:02.291768 kubelet[2980]: I0416 04:16:02.290208 2980 scope.go:122] "RemoveContainer" containerID="23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b" Apr 16 04:16:02.862764 containerd[1575]: time="2026-04-16T04:16:02.862177605Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:5,}" Apr 16 04:16:03.233890 sshd[6652]: Connection closed by 10.0.0.1 port 42436 Apr 16 04:16:03.235878 sshd-session[6624]: pam_unix(sshd:session): session closed for user core Apr 16 04:16:03.580006 containerd[1575]: time="2026-04-16T04:16:03.459870047Z" level=error msg="ExecSync for \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 16 04:16:03.677342 containerd[1575]: time="2026-04-16T04:16:03.674778989Z" level=info 
msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:7,}" Apr 16 04:16:03.749911 kubelet[2980]: E0416 04:16:03.742497 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 16 04:16:03.865065 systemd[1]: sshd@85-10.0.0.115:22-10.0.0.1:42436.service: Deactivated successfully. Apr 16 04:16:03.993280 systemd[1]: sshd@85-10.0.0.115:22-10.0.0.1:42436.service: Consumed 3.968s CPU time, 3.2M memory peak. Apr 16 04:16:04.052491 kubelet[2980]: E0416 04:16:04.049865 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.362s" Apr 16 04:16:04.622948 kubelet[2980]: I0416 04:16:04.622875 2980 scope.go:122] "RemoveContainer" containerID="1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090" Apr 16 04:16:04.783388 kubelet[2980]: E0416 04:16:04.647073 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:16:05.153715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538970562.mount: Deactivated successfully. Apr 16 04:16:05.516643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount859168919.mount: Deactivated successfully. Apr 16 04:16:05.676809 containerd[1575]: time="2026-04-16T04:16:05.673213692Z" level=info msg="Container ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:16:05.718968 systemd[1]: session-86.scope: Deactivated successfully. 
Apr 16 04:16:05.725802 systemd[1]: session-86.scope: Consumed 5.484s CPU time, 18.4M memory peak. Apr 16 04:16:05.995289 containerd[1575]: time="2026-04-16T04:16:05.993043814Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:7,}" Apr 16 04:16:06.021319 systemd-logind[1549]: Session 86 logged out. Waiting for processes to exit. Apr 16 04:16:06.222232 kubelet[2980]: E0416 04:16:06.218029 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.602s" Apr 16 04:16:06.546939 systemd-logind[1549]: Removed session 86. Apr 16 04:16:07.763952 containerd[1575]: time="2026-04-16T04:16:07.763462204Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:5,} returns container id \"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\"" Apr 16 04:16:07.951702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2589667851.mount: Deactivated successfully. 
Apr 16 04:16:08.000356 containerd[1575]: time="2026-04-16T04:16:07.966494934Z" level=info msg="StartContainer for \"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\"" Apr 16 04:16:08.178593 containerd[1575]: time="2026-04-16T04:16:08.176980728Z" level=info msg="connecting to shim ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778" address="unix:///run/containerd/s/64fc1d346c666396b4a6f4eda52f8f58d8abeacdc8da519fac54d1b45f3029a3" protocol=ttrpc version=3 Apr 16 04:16:08.752455 containerd[1575]: time="2026-04-16T04:16:08.750878882Z" level=info msg="Container a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:16:09.055249 systemd[1]: Started sshd@86-10.0.0.115:22-10.0.0.1:33564.service - OpenSSH per-connection server daemon (10.0.0.1:33564). Apr 16 04:16:10.350635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3180591381.mount: Deactivated successfully. Apr 16 04:16:12.309118 kubelet[2980]: E0416 04:16:12.299055 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.639s" Apr 16 04:16:12.694524 containerd[1575]: time="2026-04-16T04:16:12.620937457Z" level=info msg="Container 37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:16:13.763470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1191010217.mount: Deactivated successfully. 
Apr 16 04:16:15.242878 containerd[1575]: time="2026-04-16T04:16:14.976579360Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for &ContainerMetadata{Name:tigera-operator,Attempt:7,} returns container id \"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\"" Apr 16 04:16:16.770875 containerd[1575]: time="2026-04-16T04:16:16.770436757Z" level=info msg="StartContainer for \"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\"" Apr 16 04:16:17.405511 containerd[1575]: time="2026-04-16T04:16:17.005885114Z" level=warning msg="container event discarded" container=23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b type=CONTAINER_STOPPED_EVENT Apr 16 04:16:18.892410 containerd[1575]: time="2026-04-16T04:16:18.785758954Z" level=info msg="connecting to shim a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9" address="unix:///run/containerd/s/b40817c4c3b3e5498badbc035a393ccaaa43aaaa06e8111d2e4d4485037a2b06" protocol=ttrpc version=3 Apr 16 04:16:20.588244 sshd[6683]: Accepted publickey for core from 10.0.0.1 port 33564 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:16:20.624594 sshd-session[6683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:16:20.628299 kubelet[2980]: E0416 04:16:20.624635 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.041s" Apr 16 04:16:20.671616 containerd[1575]: time="2026-04-16T04:16:20.671533462Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:7,} returns container id \"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\"" Apr 16 04:16:20.833941 containerd[1575]: time="2026-04-16T04:16:20.825078130Z" level=info msg="StartContainer for 
\"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\"" Apr 16 04:16:20.941453 systemd-logind[1549]: New session 87 of user core. Apr 16 04:16:21.059966 systemd[1]: Started session-87.scope - Session 87 of User core. Apr 16 04:16:22.489484 containerd[1575]: time="2026-04-16T04:16:22.458975282Z" level=warning msg="container event discarded" container=d94ec84e50bb56055e8b861c3dd77b0499c32c68fc1a71aadbee1a9f555c504b type=CONTAINER_DELETED_EVENT Apr 16 04:16:23.892985 containerd[1575]: time="2026-04-16T04:16:23.375947094Z" level=info msg="connecting to shim 37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6" address="unix:///run/containerd/s/b0f2c5cfffdebf676e7ed85c3328df6a87775c2b04620a5f0b47a494ee449f34" protocol=ttrpc version=3 Apr 16 04:16:25.476355 kubelet[2980]: E0416 04:16:25.475161 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.527s" Apr 16 04:16:28.583859 kubelet[2980]: E0416 04:16:28.569807 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.094s" Apr 16 04:16:32.239051 kubelet[2980]: E0416 04:16:32.238971 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.499s" Apr 16 04:16:32.496419 systemd[1]: Started cri-containerd-a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9.scope - libcontainer container a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9. Apr 16 04:16:32.613374 systemd[1]: Started cri-containerd-ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778.scope - libcontainer container ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778. 
Apr 16 04:16:32.652849 containerd[1575]: time="2026-04-16T04:16:32.613351250Z" level=warning msg="container event discarded" container=461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142 type=CONTAINER_STOPPED_EVENT Apr 16 04:16:32.741452 sshd[6715]: Connection closed by 10.0.0.1 port 33564 Apr 16 04:16:32.740708 sshd-session[6683]: pam_unix(sshd:session): session closed for user core Apr 16 04:16:32.824482 systemd[1]: sshd@86-10.0.0.115:22-10.0.0.1:33564.service: Deactivated successfully. Apr 16 04:16:32.843134 systemd[1]: sshd@86-10.0.0.115:22-10.0.0.1:33564.service: Consumed 2.596s CPU time, 5.5M memory peak. Apr 16 04:16:32.858371 systemd[1]: session-87.scope: Deactivated successfully. Apr 16 04:16:32.891011 containerd[1575]: time="2026-04-16T04:16:32.886149627Z" level=warning msg="container event discarded" container=8aedace0d4512ea4f17a9ca5ab40f35616d26efad68c5d47b75ec80debff9563 type=CONTAINER_DELETED_EVENT Apr 16 04:16:32.858864 systemd[1]: session-87.scope: Consumed 4.763s CPU time, 18.1M memory peak. Apr 16 04:16:32.896358 systemd-logind[1549]: Session 87 logged out. Waiting for processes to exit. Apr 16 04:16:32.915678 systemd[1]: Started cri-containerd-37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6.scope - libcontainer container 37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6. Apr 16 04:16:32.921741 systemd-logind[1549]: Removed session 87. 
Apr 16 04:16:33.543902 containerd[1575]: time="2026-04-16T04:16:33.542032754Z" level=warning msg="container event discarded" container=1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090 type=CONTAINER_STOPPED_EVENT Apr 16 04:16:33.732832 containerd[1575]: time="2026-04-16T04:16:33.732691890Z" level=info msg="StartContainer for \"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" returns successfully" Apr 16 04:16:34.088908 containerd[1575]: time="2026-04-16T04:16:34.088650148Z" level=info msg="StartContainer for \"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" returns successfully" Apr 16 04:16:34.488534 containerd[1575]: time="2026-04-16T04:16:34.487082224Z" level=info msg="StartContainer for \"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\" returns successfully" Apr 16 04:16:34.758985 kubelet[2980]: E0416 04:16:34.756527 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:16:34.831167 kubelet[2980]: E0416 04:16:34.830528 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:16:35.444449 systemd-networkd[1502]: cali6da2d38aaed: Link UP Apr 16 04:16:35.444672 systemd-networkd[1502]: cali6da2d38aaed: Gained carrier Apr 16 04:16:35.872799 kubelet[2980]: E0416 04:16:35.865634 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:16:36.057563 containerd[1575]: 2026-04-16 04:15:09.908 [ERROR][6582] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 16 04:16:36.057563 
containerd[1575]: 2026-04-16 04:15:31.907 [INFO][6582] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--r69h8-eth0 csi-node-driver- calico-system 6bb8af70-d3bd-4282-a3de-bea0ffd9b767 1804 0 2026-04-16 04:06:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-r69h8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6da2d38aaed [] [] }} ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Namespace="calico-system" Pod="csi-node-driver-r69h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--r69h8-" Apr 16 04:16:36.057563 containerd[1575]: 2026-04-16 04:15:32.140 [INFO][6582] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Namespace="calico-system" Pod="csi-node-driver-r69h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--r69h8-eth0" Apr 16 04:16:36.057563 containerd[1575]: 2026-04-16 04:16:09.154 [INFO][6634] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" HandleID="k8s-pod-network.e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Workload="localhost-k8s-csi--node--driver--r69h8-eth0" Apr 16 04:16:36.131934 containerd[1575]: 2026-04-16 04:16:19.887 [INFO][6634] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" HandleID="k8s-pod-network.e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Workload="localhost-k8s-csi--node--driver--r69h8-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-r69h8", "timestamp":"2026-04-16 04:16:09.154909228 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00023e000)} Apr 16 04:16:36.131934 containerd[1575]: 2026-04-16 04:16:19.893 [INFO][6634] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 16 04:16:36.131934 containerd[1575]: 2026-04-16 04:16:19.894 [INFO][6634] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 16 04:16:36.131934 containerd[1575]: 2026-04-16 04:16:19.901 [INFO][6634] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 16 04:16:36.131934 containerd[1575]: 2026-04-16 04:16:20.680 [INFO][6634] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" host="localhost" Apr 16 04:16:36.131934 containerd[1575]: 2026-04-16 04:16:32.179 [INFO][6634] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 16 04:16:36.131934 containerd[1575]: 2026-04-16 04:16:32.656 [INFO][6634] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost" Apr 16 04:16:36.131934 containerd[1575]: 2026-04-16 04:16:32.801 [INFO][6634] ipam/ipam.go 575: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="localhost" Apr 16 04:16:36.131934 containerd[1575]: 2026-04-16 04:16:32.905 [INFO][6634] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.88.128/26 Apr 16 04:16:36.131934 containerd[1575]: 2026-04-16 04:16:32.909 [INFO][6634] ipam/ipam.go 588: Found unclaimed block in 107.730583ms host="localhost" subnet=192.168.88.128/26 Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:32.910 [INFO][6634] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.044 [INFO][6634] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="localhost" subnet=192.168.88.128/26 Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.066 [INFO][6634] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.116 [INFO][6634] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.281 [INFO][6634] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.365 [INFO][6634] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.369 [INFO][6634] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26 Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.461 [INFO][6634] ipam/ipam_block_reader_writer.go 267: Successfully created block Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.463 [INFO][6634] ipam/ipam_block_reader_writer.go 283: Confirming affinity 
host="localhost" subnet=192.168.88.128/26 Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.807 [INFO][6634] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="localhost" subnet=192.168.88.128/26 Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.888 [INFO][6634] ipam/ipam.go 623: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. host="localhost" subnet=192.168.88.128/26 Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.927 [INFO][6634] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" host="localhost" Apr 16 04:16:36.134474 containerd[1575]: 2026-04-16 04:16:33.945 [INFO][6634] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec Apr 16 04:16:36.135059 containerd[1575]: 2026-04-16 04:16:34.253 [INFO][6634] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" host="localhost" Apr 16 04:16:36.135059 containerd[1575]: 2026-04-16 04:16:35.028 [INFO][6634] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" host="localhost" Apr 16 04:16:36.135059 containerd[1575]: 2026-04-16 04:16:35.031 [INFO][6634] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" host="localhost" Apr 16 04:16:36.135059 containerd[1575]: 2026-04-16 04:16:35.031 [INFO][6634] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 16 04:16:36.135059 containerd[1575]: 2026-04-16 04:16:35.031 [INFO][6634] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" HandleID="k8s-pod-network.e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Workload="localhost-k8s-csi--node--driver--r69h8-eth0" Apr 16 04:16:36.153392 containerd[1575]: 2026-04-16 04:16:35.206 [INFO][6582] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Namespace="calico-system" Pod="csi-node-driver-r69h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--r69h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r69h8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bb8af70-d3bd-4282-a3de-bea0ffd9b767", ResourceVersion:"1804", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 6, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-r69h8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6da2d38aaed", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:16:36.153713 containerd[1575]: 2026-04-16 04:16:35.210 [INFO][6582] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.128/32] ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Namespace="calico-system" Pod="csi-node-driver-r69h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--r69h8-eth0" Apr 16 04:16:36.153713 containerd[1575]: 2026-04-16 04:16:35.210 [INFO][6582] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6da2d38aaed ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Namespace="calico-system" Pod="csi-node-driver-r69h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--r69h8-eth0" Apr 16 04:16:36.153713 containerd[1575]: 2026-04-16 04:16:35.445 [INFO][6582] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Namespace="calico-system" Pod="csi-node-driver-r69h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--r69h8-eth0" Apr 16 04:16:36.153869 containerd[1575]: 2026-04-16 04:16:35.446 [INFO][6582] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Namespace="calico-system" Pod="csi-node-driver-r69h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--r69h8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--r69h8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6bb8af70-d3bd-4282-a3de-bea0ffd9b767", ResourceVersion:"1804", Generation:0, CreationTimestamp:time.Date(2026, time.April, 16, 4, 6, 4, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec", Pod:"csi-node-driver-r69h8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6da2d38aaed", MAC:"de:50:b9:d9:48:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 16 04:16:36.153978 containerd[1575]: 2026-04-16 04:16:35.864 [INFO][6582] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" Namespace="calico-system" Pod="csi-node-driver-r69h8" WorkloadEndpoint="localhost-k8s-csi--node--driver--r69h8-eth0" Apr 16 04:16:36.922477 systemd-networkd[1502]: cali6da2d38aaed: Gained IPv6LL Apr 16 04:16:36.954308 kubelet[2980]: E0416 04:16:36.953228 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:16:37.054552 containerd[1575]: time="2026-04-16T04:16:37.049610120Z" level=info msg="connecting to shim e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" 
address="unix:///run/containerd/s/5f7dd243c3c1bc3f60c2f6f52ee4b4520af0962bfd7e12825b897381f6e09c2f" namespace=k8s.io protocol=ttrpc version=3 Apr 16 04:16:38.406461 systemd[1]: Started cri-containerd-e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec.scope - libcontainer container e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec. Apr 16 04:16:38.437533 systemd[1]: Started sshd@87-10.0.0.115:22-10.0.0.1:40148.service - OpenSSH per-connection server daemon (10.0.0.1:40148). Apr 16 04:16:39.368145 containerd[1575]: time="2026-04-16T04:16:39.362515851Z" level=warning msg="container event discarded" container=6122b5ae811dde545c074b08a9209c5ee7e62383a1a0ab13a1ae10eb70d64117 type=CONTAINER_STOPPED_EVENT Apr 16 04:16:39.689993 containerd[1575]: time="2026-04-16T04:16:39.682344644Z" level=warning msg="container event discarded" container=83ffb475bbf93ece1e4f2fb4e961c6538caf1c9d614eda8b00dba6b57a18b929 type=CONTAINER_DELETED_EVENT Apr 16 04:16:39.735897 sshd[6866]: Accepted publickey for core from 10.0.0.1 port 40148 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:16:39.757482 sshd-session[6866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:16:39.996322 systemd-logind[1549]: New session 88 of user core. Apr 16 04:16:40.048230 systemd[1]: Started session-88.scope - Session 88 of User core. 
Apr 16 04:16:40.062795 systemd-resolved[1425]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 16 04:16:40.520830 containerd[1575]: time="2026-04-16T04:16:40.515765220Z" level=error msg="get state for e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec" error="context deadline exceeded" Apr 16 04:16:40.524339 containerd[1575]: time="2026-04-16T04:16:40.524295732Z" level=warning msg="unknown status" status=0 Apr 16 04:16:41.251381 kubelet[2980]: E0416 04:16:41.141460 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:16:41.904377 containerd[1575]: time="2026-04-16T04:16:41.903079336Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 04:16:43.993140 containerd[1575]: time="2026-04-16T04:16:43.988403469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r69h8,Uid:6bb8af70-d3bd-4282-a3de-bea0ffd9b767,Namespace:calico-system,Attempt:0,} returns sandbox id \"e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec\"" Apr 16 04:16:44.181538 sshd[6877]: Connection closed by 10.0.0.1 port 40148 Apr 16 04:16:44.185409 sshd-session[6866]: pam_unix(sshd:session): session closed for user core Apr 16 04:16:44.378852 containerd[1575]: time="2026-04-16T04:16:44.356435231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 16 04:16:44.399183 systemd-logind[1549]: Session 88 logged out. Waiting for processes to exit. Apr 16 04:16:44.401546 systemd[1]: sshd@87-10.0.0.115:22-10.0.0.1:40148.service: Deactivated successfully. Apr 16 04:16:44.578082 systemd[1]: session-88.scope: Deactivated successfully. Apr 16 04:16:44.603637 systemd[1]: session-88.scope: Consumed 1.449s CPU time, 16.9M memory peak. Apr 16 04:16:44.965546 systemd-logind[1549]: Removed session 88. 
Apr 16 04:16:46.399167 kubelet[2980]: E0416 04:16:46.398364 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:16:47.370572 kubelet[2980]: E0416 04:16:47.360982 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:16:49.446165 systemd[1]: Started sshd@88-10.0.0.115:22-10.0.0.1:33076.service - OpenSSH per-connection server daemon (10.0.0.1:33076). Apr 16 04:16:50.416158 sshd[6949]: Accepted publickey for core from 10.0.0.1 port 33076 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:16:50.718050 sshd-session[6949]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:16:51.334037 systemd-logind[1549]: New session 89 of user core. Apr 16 04:16:51.369006 systemd[1]: Started session-89.scope - Session 89 of User core. Apr 16 04:16:51.388321 kubelet[2980]: E0416 04:16:51.387944 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:16:52.997374 containerd[1575]: time="2026-04-16T04:16:52.996330295Z" level=warning msg="container event discarded" container=0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412 type=CONTAINER_CREATED_EVENT Apr 16 04:16:54.880667 sshd[7001]: Connection closed by 10.0.0.1 port 33076 Apr 16 04:16:54.920697 sshd-session[6949]: pam_unix(sshd:session): session closed for user core Apr 16 04:16:55.026745 systemd[1]: sshd@88-10.0.0.115:22-10.0.0.1:33076.service: Deactivated successfully. Apr 16 04:16:55.186312 systemd[1]: session-89.scope: Deactivated successfully. Apr 16 04:16:55.243416 systemd[1]: session-89.scope: Consumed 1.168s CPU time, 17.2M memory peak. 
Apr 16 04:16:55.281695 systemd-logind[1549]: Session 89 logged out. Waiting for processes to exit. Apr 16 04:16:55.303978 systemd-logind[1549]: Removed session 89. Apr 16 04:16:58.208889 containerd[1575]: time="2026-04-16T04:16:58.182493786Z" level=warning msg="container event discarded" container=0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412 type=CONTAINER_STARTED_EVENT Apr 16 04:16:58.637376 containerd[1575]: time="2026-04-16T04:16:58.635059486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:16:58.640359 containerd[1575]: time="2026-04-16T04:16:58.638898501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 16 04:16:58.657426 containerd[1575]: time="2026-04-16T04:16:58.646028793Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:16:58.770520 containerd[1575]: time="2026-04-16T04:16:58.763591497Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:16:58.829375 containerd[1575]: time="2026-04-16T04:16:58.810545622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 14.454062536s" Apr 16 04:16:58.865703 containerd[1575]: time="2026-04-16T04:16:58.857341193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 16 04:16:59.224804 containerd[1575]: time="2026-04-16T04:16:59.217167679Z" level=info msg="CreateContainer within sandbox \"e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 16 04:16:59.391286 containerd[1575]: time="2026-04-16T04:16:59.388409086Z" level=error msg="ExecSync for \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Apr 16 04:16:59.417247 kubelet[2980]: E0416 04:16:59.393121 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Apr 16 04:16:59.532794 containerd[1575]: time="2026-04-16T04:16:59.521014316Z" level=info msg="Container 410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:16:59.592218 containerd[1575]: time="2026-04-16T04:16:59.591891367Z" level=info msg="CreateContainer within sandbox \"e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8\"" Apr 16 04:16:59.596023 containerd[1575]: time="2026-04-16T04:16:59.594303493Z" level=info msg="StartContainer for \"410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8\"" Apr 16 04:16:59.622391 containerd[1575]: time="2026-04-16T04:16:59.621676730Z" level=info msg="connecting to shim 410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8" 
address="unix:///run/containerd/s/5f7dd243c3c1bc3f60c2f6f52ee4b4520af0962bfd7e12825b897381f6e09c2f" protocol=ttrpc version=3 Apr 16 04:16:59.959918 systemd[1]: Started sshd@89-10.0.0.115:22-10.0.0.1:38170.service - OpenSSH per-connection server daemon (10.0.0.1:38170). Apr 16 04:17:00.030785 systemd[1]: Started cri-containerd-410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8.scope - libcontainer container 410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8. Apr 16 04:17:01.410321 sshd[7082]: Accepted publickey for core from 10.0.0.1 port 38170 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:17:01.429723 sshd-session[7082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:01.552912 systemd-logind[1549]: New session 90 of user core. Apr 16 04:17:01.561067 systemd[1]: Started session-90.scope - Session 90 of User core. Apr 16 04:17:02.634569 containerd[1575]: time="2026-04-16T04:17:02.633816061Z" level=error msg="get state for 410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8" error="context deadline exceeded" Apr 16 04:17:02.634569 containerd[1575]: time="2026-04-16T04:17:02.634174836Z" level=warning msg="unknown status" status=0 Apr 16 04:17:04.419804 sshd[7096]: Connection closed by 10.0.0.1 port 38170 Apr 16 04:17:04.425531 sshd-session[7082]: pam_unix(sshd:session): session closed for user core Apr 16 04:17:04.492356 systemd[1]: sshd@89-10.0.0.115:22-10.0.0.1:38170.service: Deactivated successfully. Apr 16 04:17:04.528374 systemd[1]: session-90.scope: Deactivated successfully. Apr 16 04:17:04.543397 systemd-logind[1549]: Session 90 logged out. Waiting for processes to exit. Apr 16 04:17:04.546706 systemd-logind[1549]: Removed session 90. 
Apr 16 04:17:04.913669 containerd[1575]: time="2026-04-16T04:17:04.892266554Z" level=error msg="get state for 410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8" error="context deadline exceeded" Apr 16 04:17:04.913669 containerd[1575]: time="2026-04-16T04:17:04.893051317Z" level=warning msg="unknown status" status=0 Apr 16 04:17:05.734628 containerd[1575]: time="2026-04-16T04:17:05.730678409Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 04:17:05.734628 containerd[1575]: time="2026-04-16T04:17:05.730845187Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 16 04:17:06.493078 containerd[1575]: time="2026-04-16T04:17:06.488653245Z" level=info msg="StartContainer for \"410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8\" returns successfully" Apr 16 04:17:06.611179 containerd[1575]: time="2026-04-16T04:17:06.610831283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 16 04:17:06.633900 kubelet[2980]: E0416 04:17:06.605444 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:17:08.805586 containerd[1575]: time="2026-04-16T04:17:08.805459769Z" level=info msg="StopContainer for \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\" with timeout 2 (s)" Apr 16 04:17:08.829391 containerd[1575]: time="2026-04-16T04:17:08.828752307Z" level=info msg="Stop container \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\" with signal terminated" Apr 16 04:17:09.429565 kubelet[2980]: E0416 04:17:09.422643 2980 kuberuntime_container.go:772] "PreStop hook failed" err="command '/bin/calico-node -shutdown' exited with 137: " pod="calico-system/calico-node-kgtx5" podUID="89b5fbad-4c87-4aac-9951-121c09bbd556" containerName="calico-node" 
containerID="containerd://833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254" Apr 16 04:17:09.779757 systemd[1]: Started sshd@90-10.0.0.115:22-10.0.0.1:59188.service - OpenSSH per-connection server daemon (10.0.0.1:59188). Apr 16 04:17:10.325975 systemd[1]: cri-containerd-833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254.scope: Deactivated successfully. Apr 16 04:17:10.333531 systemd[1]: cri-containerd-833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254.scope: Consumed 23.000s CPU time, 137.7M memory peak, 29M read from disk, 636K written to disk. Apr 16 04:17:10.635914 containerd[1575]: time="2026-04-16T04:17:10.629347094Z" level=info msg="received container exit event container_id:\"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\" id:\"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\" pid:6461 exited_at:{seconds:1776313030 nanos:499415598}" Apr 16 04:17:11.263320 containerd[1575]: time="2026-04-16T04:17:11.261328356Z" level=info msg="Kill container \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\"" Apr 16 04:17:11.663938 sshd[7186]: Accepted publickey for core from 10.0.0.1 port 59188 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:17:11.718416 sshd-session[7186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:12.165476 systemd-logind[1549]: New session 91 of user core. Apr 16 04:17:12.204058 systemd[1]: Started session-91.scope - Session 91 of User core. 
Apr 16 04:17:12.414246 containerd[1575]: time="2026-04-16T04:17:12.389357032Z" level=error msg="StopContainer for \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\" failed" error="rpc error: code = Unknown desc = failed to kill container \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\": ttrpc: closed" Apr 16 04:17:12.663821 containerd[1575]: time="2026-04-16T04:17:12.406589290Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/aeabc3715f557963c617c8591f62e432aca8901fc2a59ed43a1f9f47d5f9452d->@: write: broken pipe" runtime=io.containerd.runc.v2 Apr 16 04:17:12.664160 kubelet[2980]: E0416 04:17:12.606864 2980 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to kill container \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\": ttrpc: closed" containerID="833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254" Apr 16 04:17:12.664160 kubelet[2980]: E0416 04:17:12.661604 2980 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = Unknown desc = failed to kill container \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\": ttrpc: closed" pod="calico-system/calico-node-kgtx5" podUID="89b5fbad-4c87-4aac-9951-121c09bbd556" containerName="calico-node" containerID="containerd://833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254" gracePeriod=2 Apr 16 04:17:12.664160 kubelet[2980]: E0416 04:17:12.661651 2980 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = Unknown desc = failed to kill container \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\": ttrpc: closed" containerName="calico-node" containerID={"Type":"containerd","ID":"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254"} pod="calico-system/calico-node-kgtx5" Apr 16 04:17:12.664160 kubelet[2980]: E0416 04:17:12.661914 2980 
pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"calico-node\" with KillContainerError: \"rpc error: code = Unknown desc = failed to kill container \\\"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\\\": ttrpc: closed\"" pod="calico-system/calico-node-kgtx5" podUID="89b5fbad-4c87-4aac-9951-121c09bbd556" Apr 16 04:17:12.685820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254-rootfs.mount: Deactivated successfully. Apr 16 04:17:14.424324 kubelet[2980]: E0416 04:17:14.423798 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.648s" Apr 16 04:17:14.984342 kubelet[2980]: I0416 04:17:14.979186 2980 scope.go:122] "RemoveContainer" containerID="833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254" Apr 16 04:17:15.559809 containerd[1575]: time="2026-04-16T04:17:15.555769798Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Apr 16 04:17:16.800790 containerd[1575]: time="2026-04-16T04:17:16.798079783Z" level=info msg="Container 9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:17:16.827995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851427262.mount: Deactivated successfully. 
Apr 16 04:17:17.376414 containerd[1575]: time="2026-04-16T04:17:17.375692009Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\"" Apr 16 04:17:17.519199 containerd[1575]: time="2026-04-16T04:17:17.517561093Z" level=info msg="StartContainer for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\"" Apr 16 04:17:17.658183 sshd[7203]: Connection closed by 10.0.0.1 port 59188 Apr 16 04:17:17.665680 sshd-session[7186]: pam_unix(sshd:session): session closed for user core Apr 16 04:17:17.819311 containerd[1575]: time="2026-04-16T04:17:17.807427089Z" level=info msg="connecting to shim 9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" address="unix:///run/containerd/s/aeabc3715f557963c617c8591f62e432aca8901fc2a59ed43a1f9f47d5f9452d" protocol=ttrpc version=3 Apr 16 04:17:17.850229 systemd[1]: sshd@90-10.0.0.115:22-10.0.0.1:59188.service: Deactivated successfully. Apr 16 04:17:17.886220 systemd[1]: session-91.scope: Deactivated successfully. Apr 16 04:17:17.901018 systemd[1]: session-91.scope: Consumed 1.211s CPU time, 16.9M memory peak. Apr 16 04:17:18.065462 systemd-logind[1549]: Session 91 logged out. Waiting for processes to exit. Apr 16 04:17:18.358385 systemd-logind[1549]: Removed session 91. Apr 16 04:17:18.863580 kubelet[2980]: E0416 04:17:18.859562 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 16 04:17:19.402581 systemd[1]: Started cri-containerd-9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9.scope - libcontainer container 9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9. 
Apr 16 04:17:21.553606 containerd[1575]: time="2026-04-16T04:17:21.546993656Z" level=error msg="get state for 9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" error="context deadline exceeded" Apr 16 04:17:21.553606 containerd[1575]: time="2026-04-16T04:17:21.551180154Z" level=warning msg="unknown status" status=0 Apr 16 04:17:22.983800 systemd[1]: Started sshd@91-10.0.0.115:22-10.0.0.1:46076.service - OpenSSH per-connection server daemon (10.0.0.1:46076). Apr 16 04:17:23.915715 containerd[1575]: time="2026-04-16T04:17:23.907758269Z" level=error msg="get state for 9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" error="context deadline exceeded" Apr 16 04:17:23.915715 containerd[1575]: time="2026-04-16T04:17:23.908150154Z" level=warning msg="unknown status" status=0 Apr 16 04:17:25.581474 sshd[7246]: Accepted publickey for core from 10.0.0.1 port 46076 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:17:25.611300 sshd-session[7246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:25.722666 systemd-logind[1549]: New session 92 of user core. Apr 16 04:17:25.777282 systemd[1]: Started session-92.scope - Session 92 of User core. 
Apr 16 04:17:26.328012 containerd[1575]: time="2026-04-16T04:17:26.154983661Z" level=error msg="get state for 9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" error="context deadline exceeded" Apr 16 04:17:26.328012 containerd[1575]: time="2026-04-16T04:17:26.328110170Z" level=warning msg="unknown status" status=0 Apr 16 04:17:26.966404 containerd[1575]: time="2026-04-16T04:17:26.966158607Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 04:17:26.966404 containerd[1575]: time="2026-04-16T04:17:26.966318062Z" level=error msg="ttrpc: received message on inactive stream" stream=5 Apr 16 04:17:26.966404 containerd[1575]: time="2026-04-16T04:17:26.966331939Z" level=error msg="ttrpc: received message on inactive stream" stream=7 Apr 16 04:17:28.141469 containerd[1575]: time="2026-04-16T04:17:28.140158442Z" level=info msg="StartContainer for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" returns successfully" Apr 16 04:17:28.916267 containerd[1575]: time="2026-04-16T04:17:28.908040748Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:17:28.916267 containerd[1575]: time="2026-04-16T04:17:28.910024974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 16 04:17:28.933613 containerd[1575]: time="2026-04-16T04:17:28.932711302Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 04:17:29.023183 containerd[1575]: time="2026-04-16T04:17:28.976748540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 16 
04:17:29.023183 containerd[1575]: time="2026-04-16T04:17:29.015383088Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 22.397587574s" Apr 16 04:17:29.041336 containerd[1575]: time="2026-04-16T04:17:29.023697470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 16 04:17:29.684498 containerd[1575]: time="2026-04-16T04:17:29.680615184Z" level=info msg="CreateContainer within sandbox \"e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 16 04:17:30.062257 containerd[1575]: time="2026-04-16T04:17:30.041491986Z" level=info msg="Container 7b2ec82ae169c492f9b993a65e87ff0dc2a26308a7005643b39a9af1f78a14d8: CDI devices from CRI Config.CDIDevices: []" Apr 16 04:17:30.062634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2159586698.mount: Deactivated successfully. Apr 16 04:17:30.173922 sshd[7250]: Connection closed by 10.0.0.1 port 46076 Apr 16 04:17:30.179227 sshd-session[7246]: pam_unix(sshd:session): session closed for user core Apr 16 04:17:30.264811 systemd[1]: sshd@91-10.0.0.115:22-10.0.0.1:46076.service: Deactivated successfully. Apr 16 04:17:30.371817 systemd-logind[1549]: Session 92 logged out. Waiting for processes to exit. Apr 16 04:17:30.405536 systemd[1]: session-92.scope: Deactivated successfully. Apr 16 04:17:30.483384 systemd-logind[1549]: Removed session 92. 
Apr 16 04:17:30.798727 containerd[1575]: time="2026-04-16T04:17:30.798537326Z" level=info msg="CreateContainer within sandbox \"e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7b2ec82ae169c492f9b993a65e87ff0dc2a26308a7005643b39a9af1f78a14d8\"" Apr 16 04:17:31.019845 containerd[1575]: time="2026-04-16T04:17:31.008422659Z" level=info msg="StartContainer for \"7b2ec82ae169c492f9b993a65e87ff0dc2a26308a7005643b39a9af1f78a14d8\"" Apr 16 04:17:31.922875 containerd[1575]: time="2026-04-16T04:17:31.914870988Z" level=info msg="connecting to shim 7b2ec82ae169c492f9b993a65e87ff0dc2a26308a7005643b39a9af1f78a14d8" address="unix:///run/containerd/s/5f7dd243c3c1bc3f60c2f6f52ee4b4520af0962bfd7e12825b897381f6e09c2f" protocol=ttrpc version=3 Apr 16 04:17:33.697987 systemd[1]: Started cri-containerd-7b2ec82ae169c492f9b993a65e87ff0dc2a26308a7005643b39a9af1f78a14d8.scope - libcontainer container 7b2ec82ae169c492f9b993a65e87ff0dc2a26308a7005643b39a9af1f78a14d8. Apr 16 04:17:35.592449 systemd[1]: Started sshd@92-10.0.0.115:22-10.0.0.1:50862.service - OpenSSH per-connection server daemon (10.0.0.1:50862). 
Apr 16 04:17:35.800417 containerd[1575]: time="2026-04-16T04:17:35.749421686Z" level=error msg="get state for 7b2ec82ae169c492f9b993a65e87ff0dc2a26308a7005643b39a9af1f78a14d8" error="context deadline exceeded" Apr 16 04:17:35.800417 containerd[1575]: time="2026-04-16T04:17:35.749769330Z" level=warning msg="unknown status" status=0 Apr 16 04:17:36.811873 containerd[1575]: time="2026-04-16T04:17:36.811395009Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 16 04:17:37.025315 sshd[7326]: Accepted publickey for core from 10.0.0.1 port 50862 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:17:37.172812 sshd-session[7326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:37.615061 systemd-logind[1549]: New session 93 of user core. Apr 16 04:17:37.633699 systemd[1]: Started session-93.scope - Session 93 of User core. Apr 16 04:17:38.446321 containerd[1575]: time="2026-04-16T04:17:38.445050539Z" level=info msg="StartContainer for \"7b2ec82ae169c492f9b993a65e87ff0dc2a26308a7005643b39a9af1f78a14d8\" returns successfully" Apr 16 04:17:42.219113 sshd[7335]: Connection closed by 10.0.0.1 port 50862 Apr 16 04:17:42.215478 sshd-session[7326]: pam_unix(sshd:session): session closed for user core Apr 16 04:17:42.345221 systemd[1]: sshd@92-10.0.0.115:22-10.0.0.1:50862.service: Deactivated successfully. Apr 16 04:17:42.557903 systemd[1]: session-93.scope: Deactivated successfully. Apr 16 04:17:42.573867 systemd[1]: session-93.scope: Consumed 1.179s CPU time, 17.7M memory peak. Apr 16 04:17:42.674042 systemd-logind[1549]: Session 93 logged out. Waiting for processes to exit. Apr 16 04:17:42.880653 systemd-logind[1549]: Removed session 93. 
Apr 16 04:17:44.810536 kubelet[2980]: I0416 04:17:44.809014 2980 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 16 04:17:44.810536 kubelet[2980]: I0416 04:17:44.809837 2980 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 16 04:17:47.760614 systemd[1]: Started sshd@93-10.0.0.115:22-10.0.0.1:57686.service - OpenSSH per-connection server daemon (10.0.0.1:57686). Apr 16 04:17:49.832767 sshd[7390]: Accepted publickey for core from 10.0.0.1 port 57686 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk Apr 16 04:17:49.835714 sshd-session[7390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 16 04:17:50.232728 systemd[1]: Started session-94.scope - Session 94 of User core. Apr 16 04:17:50.249166 systemd-logind[1549]: New session 94 of user core. Apr 16 04:17:54.250500 sshd[7394]: Connection closed by 10.0.0.1 port 57686 Apr 16 04:17:54.271958 sshd-session[7390]: pam_unix(sshd:session): session closed for user core Apr 16 04:17:54.294325 systemd[1]: sshd@93-10.0.0.115:22-10.0.0.1:57686.service: Deactivated successfully. Apr 16 04:17:54.664106 systemd[1]: session-94.scope: Deactivated successfully. Apr 16 04:17:54.666655 systemd[1]: session-94.scope: Consumed 1.097s CPU time, 15.6M memory peak. Apr 16 04:17:54.725820 systemd-logind[1549]: Session 94 logged out. Waiting for processes to exit. Apr 16 04:17:54.799355 systemd-logind[1549]: Removed session 94. 
Apr 16 04:17:55.035911 containerd[1575]: time="2026-04-16T04:17:55.019640278Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 16 04:17:55.053520 kubelet[2980]: E0416 04:17:55.029667 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:17:59.717054 systemd[1]: Started sshd@94-10.0.0.115:22-10.0.0.1:48754.service - OpenSSH per-connection server daemon (10.0.0.1:48754).
Apr 16 04:18:02.869827 sshd[7417]: Accepted publickey for core from 10.0.0.1 port 48754 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:18:03.055173 sshd-session[7417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:04.049977 systemd-logind[1549]: New session 95 of user core.
Apr 16 04:18:04.105887 systemd[1]: Started session-95.scope - Session 95 of User core.
Apr 16 04:18:06.874369 kubelet[2980]: E0416 04:18:06.871351 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:18:07.628563 kubelet[2980]: E0416 04:18:07.626819 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:18:10.944407 sshd[7424]: Connection closed by 10.0.0.1 port 48754
Apr 16 04:18:10.952309 sshd-session[7417]: pam_unix(sshd:session): session closed for user core
Apr 16 04:18:11.066885 systemd-logind[1549]: Session 95 logged out. Waiting for processes to exit.
Apr 16 04:18:11.142931 systemd[1]: sshd@94-10.0.0.115:22-10.0.0.1:48754.service: Deactivated successfully.
Apr 16 04:18:11.324768 systemd[1]: session-95.scope: Deactivated successfully.
Apr 16 04:18:11.344455 systemd[1]: session-95.scope: Consumed 1.861s CPU time, 15.3M memory peak.
Apr 16 04:18:11.351461 systemd-logind[1549]: Removed session 95.
Apr 16 04:18:16.397750 systemd[1]: Started sshd@95-10.0.0.115:22-10.0.0.1:34076.service - OpenSSH per-connection server daemon (10.0.0.1:34076).
Apr 16 04:18:20.542307 sshd[7471]: Accepted publickey for core from 10.0.0.1 port 34076 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:18:20.655043 sshd-session[7471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:21.123291 systemd-logind[1549]: New session 96 of user core.
Apr 16 04:18:21.229369 systemd[1]: Started session-96.scope - Session 96 of User core.
Apr 16 04:18:23.402206 sshd[7495]: Connection closed by 10.0.0.1 port 34076
Apr 16 04:18:23.462780 sshd-session[7471]: pam_unix(sshd:session): session closed for user core
Apr 16 04:18:23.542988 systemd[1]: sshd@95-10.0.0.115:22-10.0.0.1:34076.service: Deactivated successfully.
Apr 16 04:18:23.608885 systemd[1]: session-96.scope: Deactivated successfully.
Apr 16 04:18:23.643516 systemd-logind[1549]: Session 96 logged out. Waiting for processes to exit.
Apr 16 04:18:23.649565 systemd-logind[1549]: Removed session 96.
Apr 16 04:18:25.566943 containerd[1575]: time="2026-04-16T04:18:25.559041871Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 16 04:18:25.757892 kubelet[2980]: E0416 04:18:25.676405 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:18:29.135428 systemd[1]: Started sshd@96-10.0.0.115:22-10.0.0.1:44490.service - OpenSSH per-connection server daemon (10.0.0.1:44490).
Apr 16 04:18:31.429272 sshd[7604]: Accepted publickey for core from 10.0.0.1 port 44490 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:18:31.603767 sshd-session[7604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:31.732633 systemd-logind[1549]: New session 97 of user core.
Apr 16 04:18:31.940651 systemd[1]: Started session-97.scope - Session 97 of User core.
Apr 16 04:18:36.760694 kubelet[2980]: E0416 04:18:36.682618 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:18:38.401323 containerd[1575]: time="2026-04-16T04:18:38.394882781Z" level=warning msg="container event discarded" container=30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8 type=CONTAINER_CREATED_EVENT
Apr 16 04:18:38.830910 sshd[7610]: Connection closed by 10.0.0.1 port 44490
Apr 16 04:18:38.831164 sshd-session[7604]: pam_unix(sshd:session): session closed for user core
Apr 16 04:18:38.848903 systemd-logind[1549]: Session 97 logged out. Waiting for processes to exit.
Apr 16 04:18:38.850189 systemd[1]: sshd@96-10.0.0.115:22-10.0.0.1:44490.service: Deactivated successfully.
Apr 16 04:18:38.855238 systemd[1]: session-97.scope: Deactivated successfully.
Apr 16 04:18:38.855512 systemd[1]: session-97.scope: Consumed 1.844s CPU time, 15.3M memory peak.
Apr 16 04:18:38.879383 systemd-logind[1549]: Removed session 97.
Apr 16 04:18:43.514137 containerd[1575]: time="2026-04-16T04:18:43.502338321Z" level=warning msg="container event discarded" container=30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8 type=CONTAINER_STARTED_EVENT
Apr 16 04:18:44.164919 systemd[1]: Started sshd@97-10.0.0.115:22-10.0.0.1:37044.service - OpenSSH per-connection server daemon (10.0.0.1:37044).
Apr 16 04:18:45.214676 kubelet[2980]: E0416 04:18:45.212143 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:18:47.999371 sshd[7662]: Accepted publickey for core from 10.0.0.1 port 37044 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:18:48.231154 sshd-session[7662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:18:48.817072 systemd-logind[1549]: New session 98 of user core.
Apr 16 04:18:48.828011 systemd[1]: Started session-98.scope - Session 98 of User core.
Apr 16 04:18:52.984999 sshd[7668]: Connection closed by 10.0.0.1 port 37044
Apr 16 04:18:53.028500 sshd-session[7662]: pam_unix(sshd:session): session closed for user core
Apr 16 04:18:53.277483 systemd[1]: sshd@97-10.0.0.115:22-10.0.0.1:37044.service: Deactivated successfully.
Apr 16 04:18:53.639122 systemd[1]: session-98.scope: Deactivated successfully.
Apr 16 04:18:53.782637 systemd[1]: session-98.scope: Consumed 1.280s CPU time, 15.1M memory peak.
Apr 16 04:18:53.840976 systemd-logind[1549]: Session 98 logged out. Waiting for processes to exit.
Apr 16 04:18:54.078270 systemd-logind[1549]: Removed session 98.
Apr 16 04:18:55.836447 containerd[1575]: time="2026-04-16T04:18:55.820279005Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 16 04:18:55.884421 kubelet[2980]: E0416 04:18:55.855722 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:18:58.330027 systemd[1]: Started sshd@98-10.0.0.115:22-10.0.0.1:54574.service - OpenSSH per-connection server daemon (10.0.0.1:54574).
Apr 16 04:19:00.589791 sshd[7720]: Accepted publickey for core from 10.0.0.1 port 54574 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:19:00.706936 sshd-session[7720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:19:01.262958 systemd-logind[1549]: New session 99 of user core.
Apr 16 04:19:01.289007 systemd[1]: Started session-99.scope - Session 99 of User core.
Apr 16 04:19:05.364566 sshd[7729]: Connection closed by 10.0.0.1 port 54574
Apr 16 04:19:05.490937 sshd-session[7720]: pam_unix(sshd:session): session closed for user core
Apr 16 04:19:05.551029 systemd[1]: sshd@98-10.0.0.115:22-10.0.0.1:54574.service: Deactivated successfully.
Apr 16 04:19:05.687475 systemd[1]: session-99.scope: Deactivated successfully.
Apr 16 04:19:05.698317 systemd[1]: session-99.scope: Consumed 1.366s CPU time, 16.2M memory peak.
Apr 16 04:19:05.723322 systemd-logind[1549]: Session 99 logged out. Waiting for processes to exit.
Apr 16 04:19:05.731722 systemd-logind[1549]: Removed session 99.
Apr 16 04:19:10.092612 containerd[1575]: time="2026-04-16T04:19:10.092252796Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 16 04:19:10.117790 kubelet[2980]: E0416 04:19:10.113753 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:19:10.955165 systemd[1]: Started sshd@99-10.0.0.115:22-10.0.0.1:33786.service - OpenSSH per-connection server daemon (10.0.0.1:33786).
Apr 16 04:19:12.946534 sshd[7756]: Accepted publickey for core from 10.0.0.1 port 33786 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:19:12.996032 sshd-session[7756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:19:13.567934 systemd[1]: Started session-100.scope - Session 100 of User core.
Apr 16 04:19:13.585878 systemd-logind[1549]: New session 100 of user core.
Apr 16 04:19:18.989179 sshd[7764]: Connection closed by 10.0.0.1 port 33786
Apr 16 04:19:19.021944 sshd-session[7756]: pam_unix(sshd:session): session closed for user core
Apr 16 04:19:19.482352 systemd[1]: sshd@99-10.0.0.115:22-10.0.0.1:33786.service: Deactivated successfully.
Apr 16 04:19:19.606952 systemd[1]: session-100.scope: Deactivated successfully.
Apr 16 04:19:19.607584 systemd[1]: session-100.scope: Consumed 1.437s CPU time, 15.3M memory peak.
Apr 16 04:19:19.736531 systemd-logind[1549]: Session 100 logged out. Waiting for processes to exit.
Apr 16 04:19:19.739961 systemd-logind[1549]: Removed session 100.
Apr 16 04:19:24.408754 systemd[1]: Started sshd@100-10.0.0.115:22-10.0.0.1:45652.service - OpenSSH per-connection server daemon (10.0.0.1:45652).
Apr 16 04:19:27.704598 kubelet[2980]: E0416 04:19:27.704513 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:19:28.671519 kubelet[2980]: E0416 04:19:28.667597 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:19:29.991748 sshd[7786]: Accepted publickey for core from 10.0.0.1 port 45652 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:19:30.936180 sshd-session[7786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:19:31.226153 systemd-logind[1549]: New session 101 of user core.
Apr 16 04:19:31.269060 systemd[1]: Started session-101.scope - Session 101 of User core.
Apr 16 04:19:33.963043 kubelet[2980]: E0416 04:19:33.954028 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.387s"
Apr 16 04:19:37.013552 kubelet[2980]: E0416 04:19:37.001332 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.334s"
Apr 16 04:19:37.567966 containerd[1575]: time="2026-04-16T04:19:37.559436388Z" level=warning msg="container event discarded" container=0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412 type=CONTAINER_STOPPED_EVENT
Apr 16 04:19:38.098045 kubelet[2980]: E0416 04:19:38.097636 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.092s"
Apr 16 04:19:39.099629 containerd[1575]: time="2026-04-16T04:19:39.095628955Z" level=warning msg="container event discarded" container=461471d106f4cb582b9204547f874066279f9837e74f2f87ff0feec2a7bbf142 type=CONTAINER_DELETED_EVENT
Apr 16 04:19:40.847230 sshd[7813]: Connection closed by 10.0.0.1 port 45652
Apr 16 04:19:40.874892 sshd-session[7786]: pam_unix(sshd:session): session closed for user core
Apr 16 04:19:41.047628 systemd[1]: sshd@100-10.0.0.115:22-10.0.0.1:45652.service: Deactivated successfully.
Apr 16 04:19:41.079293 systemd[1]: session-101.scope: Deactivated successfully.
Apr 16 04:19:41.079602 systemd[1]: session-101.scope: Consumed 2.261s CPU time, 18M memory peak.
Apr 16 04:19:41.104433 systemd-logind[1549]: Session 101 logged out. Waiting for processes to exit.
Apr 16 04:19:41.151493 systemd-logind[1549]: Removed session 101.
Apr 16 04:19:46.804926 systemd[1]: Started sshd@101-10.0.0.115:22-10.0.0.1:37648.service - OpenSSH per-connection server daemon (10.0.0.1:37648).
Apr 16 04:19:47.209366 containerd[1575]: time="2026-04-16T04:19:47.208776681Z" level=warning msg="container event discarded" container=30dc5e828fdc9d0903bc8b8999a428e8b48bdd1b16a5248c764c1a2b875de7e8 type=CONTAINER_STOPPED_EVENT
Apr 16 04:19:49.001257 containerd[1575]: time="2026-04-16T04:19:48.997570069Z" level=warning msg="container event discarded" container=833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254 type=CONTAINER_CREATED_EVENT
Apr 16 04:19:51.862369 containerd[1575]: time="2026-04-16T04:19:51.709401491Z" level=warning msg="container event discarded" container=833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254 type=CONTAINER_STARTED_EVENT
Apr 16 04:19:52.475806 kubelet[2980]: E0416 04:19:52.288551 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.664s"
Apr 16 04:19:53.179611 containerd[1575]: time="2026-04-16T04:19:53.085189981Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 16 04:19:53.222481 kubelet[2980]: E0416 04:19:53.215863 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:19:53.780891 kubelet[2980]: E0416 04:19:53.631988 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:19:54.461119 sshd[7859]: Accepted publickey for core from 10.0.0.1 port 37648 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:19:54.631354 sshd-session[7859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:19:56.247244 systemd-logind[1549]: New session 102 of user core.
Apr 16 04:19:56.294048 systemd[1]: Started session-102.scope - Session 102 of User core.
Apr 16 04:20:01.705677 sshd[7866]: Connection closed by 10.0.0.1 port 37648
Apr 16 04:20:01.710027 sshd-session[7859]: pam_unix(sshd:session): session closed for user core
Apr 16 04:20:02.483259 systemd[1]: sshd@101-10.0.0.115:22-10.0.0.1:37648.service: Deactivated successfully.
Apr 16 04:20:02.540044 systemd[1]: sshd@101-10.0.0.115:22-10.0.0.1:37648.service: Consumed 1.020s CPU time, 3.8M memory peak.
Apr 16 04:20:02.544771 systemd[1]: session-102.scope: Deactivated successfully.
Apr 16 04:20:02.559991 systemd[1]: session-102.scope: Consumed 1.427s CPU time, 15.7M memory peak.
Apr 16 04:20:02.783315 systemd-logind[1549]: Session 102 logged out. Waiting for processes to exit.
Apr 16 04:20:03.337032 systemd-logind[1549]: Removed session 102.
Apr 16 04:20:06.581810 kubelet[2980]: E0416 04:20:06.578907 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:20:07.225524 systemd[1]: Started sshd@102-10.0.0.115:22-10.0.0.1:56000.service - OpenSSH per-connection server daemon (10.0.0.1:56000).
Apr 16 04:20:09.704826 sshd[7904]: Accepted publickey for core from 10.0.0.1 port 56000 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:20:09.967073 sshd-session[7904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:20:10.888442 systemd-logind[1549]: New session 103 of user core.
Apr 16 04:20:11.040623 systemd[1]: Started session-103.scope - Session 103 of User core.
Apr 16 04:20:17.401274 sshd[7910]: Connection closed by 10.0.0.1 port 56000
Apr 16 04:20:17.402903 sshd-session[7904]: pam_unix(sshd:session): session closed for user core
Apr 16 04:20:17.538443 containerd[1575]: time="2026-04-16T04:20:17.404891656Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 16 04:20:17.746862 kubelet[2980]: E0416 04:20:17.405390 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:20:17.894222 systemd[1]: sshd@102-10.0.0.115:22-10.0.0.1:56000.service: Deactivated successfully.
Apr 16 04:20:18.291729 systemd[1]: session-103.scope: Deactivated successfully.
Apr 16 04:20:18.304927 systemd[1]: session-103.scope: Consumed 2.192s CPU time, 15M memory peak.
Apr 16 04:20:18.443367 systemd-logind[1549]: Session 103 logged out. Waiting for processes to exit.
Apr 16 04:20:18.788547 systemd-logind[1549]: Removed session 103.
Apr 16 04:20:22.550993 systemd[1]: Started sshd@103-10.0.0.115:22-10.0.0.1:48710.service - OpenSSH per-connection server daemon (10.0.0.1:48710).
Apr 16 04:20:25.355063 sshd[7953]: Accepted publickey for core from 10.0.0.1 port 48710 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:20:25.492980 sshd-session[7953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:20:25.693754 systemd-logind[1549]: New session 104 of user core.
Apr 16 04:20:25.861429 systemd[1]: Started session-104.scope - Session 104 of User core.
Apr 16 04:20:29.527652 sshd[7962]: Connection closed by 10.0.0.1 port 48710
Apr 16 04:20:29.545759 sshd-session[7953]: pam_unix(sshd:session): session closed for user core
Apr 16 04:20:29.931963 systemd[1]: sshd@103-10.0.0.115:22-10.0.0.1:48710.service: Deactivated successfully.
Apr 16 04:20:30.203908 systemd[1]: session-104.scope: Deactivated successfully.
Apr 16 04:20:30.220863 systemd[1]: session-104.scope: Consumed 1.043s CPU time, 15M memory peak.
Apr 16 04:20:30.312814 systemd-logind[1549]: Session 104 logged out. Waiting for processes to exit.
Apr 16 04:20:30.327818 systemd-logind[1549]: Removed session 104.
Apr 16 04:20:34.761574 kubelet[2980]: E0416 04:20:34.715162 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.262s"
Apr 16 04:20:36.203447 systemd[1]: Started sshd@104-10.0.0.115:22-10.0.0.1:50206.service - OpenSSH per-connection server daemon (10.0.0.1:50206).
Apr 16 04:20:36.902474 kubelet[2980]: E0416 04:20:36.896502 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.122s"
Apr 16 04:20:45.056300 kubelet[2980]: E0416 04:20:45.055029 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.109s"
Apr 16 04:20:45.966592 sshd[7981]: Accepted publickey for core from 10.0.0.1 port 50206 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:20:46.273556 sshd-session[7981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:20:46.785149 systemd-logind[1549]: New session 105 of user core.
Apr 16 04:20:47.021966 systemd[1]: Started session-105.scope - Session 105 of User core.
Apr 16 04:20:47.225876 kubelet[2980]: E0416 04:20:46.959310 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.899s"
Apr 16 04:20:49.140833 kubelet[2980]: E0416 04:20:49.140702 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.11s"
Apr 16 04:20:52.743353 kubelet[2980]: E0416 04:20:52.742588 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.206s"
Apr 16 04:20:55.312706 kubelet[2980]: E0416 04:20:55.311871 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:20:55.928755 kubelet[2980]: E0416 04:20:55.928039 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:21:02.573432 kubelet[2980]: E0416 04:21:02.568481 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.9s"
Apr 16 04:21:03.530616 sshd[7985]: Connection closed by 10.0.0.1 port 50206
Apr 16 04:21:03.662754 sshd-session[7981]: pam_unix(sshd:session): session closed for user core
Apr 16 04:21:05.290013 systemd[1]: sshd@104-10.0.0.115:22-10.0.0.1:50206.service: Deactivated successfully.
Apr 16 04:21:05.473849 systemd[1]: sshd@104-10.0.0.115:22-10.0.0.1:50206.service: Consumed 1.291s CPU time, 9.2M memory peak.
Apr 16 04:21:06.187842 systemd[1]: session-105.scope: Deactivated successfully.
Apr 16 04:21:06.333358 systemd[1]: session-105.scope: Consumed 1.763s CPU time, 29.9M memory peak.
Apr 16 04:21:07.300589 systemd-logind[1549]: Session 105 logged out. Waiting for processes to exit.
Apr 16 04:21:08.605882 systemd-logind[1549]: Removed session 105.
Apr 16 04:21:08.899527 containerd[1575]: time="2026-04-16T04:21:07.945109189Z" level=warning msg="container event discarded" container=ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778 type=CONTAINER_CREATED_EVENT
Apr 16 04:21:10.519361 systemd[1]: Started sshd@105-10.0.0.115:22-10.0.0.1:45978.service - OpenSSH per-connection server daemon (10.0.0.1:45978).
Apr 16 04:21:10.893659 systemd[1]: cri-containerd-ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778.scope: Deactivated successfully.
Apr 16 04:21:11.099857 systemd[1]: cri-containerd-ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778.scope: Consumed 23.923s CPU time, 28.6M memory peak, 6.5M read from disk.
Apr 16 04:21:11.290880 systemd[1]: cri-containerd-37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6.scope: Deactivated successfully.
Apr 16 04:21:11.566448 systemd[1]: cri-containerd-37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6.scope: Consumed 33.271s CPU time, 65.9M memory peak, 4.7M read from disk.
Apr 16 04:21:14.259875 containerd[1575]: time="2026-04-16T04:21:14.233398465Z" level=warning msg="container event discarded" container=a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9 type=CONTAINER_CREATED_EVENT
Apr 16 04:21:15.762596 containerd[1575]: time="2026-04-16T04:21:15.513024125Z" level=info msg="received container exit event container_id:\"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\" id:\"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\" pid:6770 exit_status:1 exited_at:{seconds:1776313275 nanos:406943103}"
Apr 16 04:21:16.364844 containerd[1575]: time="2026-04-16T04:21:15.933676830Z" level=info msg="received container exit event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1 exited_at:{seconds:1776313274 nanos:836058031}"
Apr 16 04:21:19.591972 systemd[1]: cri-containerd-a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9.scope: Deactivated successfully.
Apr 16 04:21:19.686758 systemd[1]: cri-containerd-a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9.scope: Consumed 22.655s CPU time, 70.8M memory peak, 2.7M read from disk.
Apr 16 04:21:20.884027 containerd[1575]: time="2026-04-16T04:21:20.850883076Z" level=warning msg="container event discarded" container=37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6 type=CONTAINER_CREATED_EVENT
Apr 16 04:21:21.678310 containerd[1575]: time="2026-04-16T04:21:21.552066192Z" level=info msg="received container exit event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312}"
Apr 16 04:21:26.346266 containerd[1575]: time="2026-04-16T04:21:26.319067654Z" level=error msg="failed to handle container TaskExit event container_id:\"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\" id:\"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\" pid:6770 exit_status:1 exited_at:{seconds:1776313275 nanos:406943103}" error="failed to stop container: context deadline exceeded"
Apr 16 04:21:27.131127 containerd[1575]: time="2026-04-16T04:21:26.704032169Z" level=error msg="failed to handle container TaskExit event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1 exited_at:{seconds:1776313274 nanos:836058031}" error="failed to stop container: context deadline exceeded"
Apr 16 04:21:27.667123 containerd[1575]: time="2026-04-16T04:21:27.647780871Z" level=error msg="ttrpc: received message on inactive stream" stream=65
Apr 16 04:21:28.085230 containerd[1575]: time="2026-04-16T04:21:28.080586323Z" level=error msg="ttrpc: received message on inactive stream" stream=65
Apr 16 04:21:28.085230 containerd[1575]: time="2026-04-16T04:21:28.081967416Z" level=error msg="ttrpc: received message on inactive stream" stream=69
Apr 16 04:21:28.269729 containerd[1575]: time="2026-04-16T04:21:28.266044861Z" level=info msg="TaskExit event container_id:\"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\" id:\"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\" pid:6770 exit_status:1 exited_at:{seconds:1776313275 nanos:406943103}"
Apr 16 04:21:31.882537 sshd[8022]: Accepted publickey for core from 10.0.0.1 port 45978 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:21:32.473563 containerd[1575]: time="2026-04-16T04:21:32.470453036Z" level=error msg="failed to handle container TaskExit event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312}" error="failed to stop container: context deadline exceeded"
Apr 16 04:21:32.557315 sshd-session[8022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:21:33.311400 containerd[1575]: time="2026-04-16T04:21:33.309172876Z" level=error msg="ttrpc: received message on inactive stream" stream=69
Apr 16 04:21:33.336946 containerd[1575]: time="2026-04-16T04:21:33.336392043Z" level=error msg="ttrpc: received message on inactive stream" stream=65
Apr 16 04:21:33.569875 systemd-logind[1549]: New session 106 of user core.
Apr 16 04:21:33.601170 systemd[1]: Started session-106.scope - Session 106 of User core.
Apr 16 04:21:33.730398 containerd[1575]: time="2026-04-16T04:21:33.728364586Z" level=warning msg="container event discarded" container=a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9 type=CONTAINER_STARTED_EVENT
Apr 16 04:21:34.052357 containerd[1575]: time="2026-04-16T04:21:34.051785689Z" level=warning msg="container event discarded" container=ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778 type=CONTAINER_STARTED_EVENT
Apr 16 04:21:34.217618 kubelet[2980]: E0416 04:21:34.193933 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="31.566s"
Apr 16 04:21:34.287717 kubelet[2980]: E0416 04:21:34.283897 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:21:34.342538 kubelet[2980]: E0416 04:21:34.292378 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:21:34.365738 kubelet[2980]: E0416 04:21:34.365323 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:21:34.564686 containerd[1575]: time="2026-04-16T04:21:34.517472443Z" level=warning msg="container event discarded" container=37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6 type=CONTAINER_STARTED_EVENT
Apr 16 04:21:34.884803 kubelet[2980]: I0416 04:21:34.874974 2980 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-r69h8" podStartSLOduration=886.125225537 podStartE2EDuration="15m30.851585153s" podCreationTimestamp="2026-04-16 04:06:04 +0000 UTC" firstStartedPulling="2026-04-16 04:16:44.355379808 +0000 UTC m=+1473.395874797" lastFinishedPulling="2026-04-16 04:17:29.081739423 +0000 UTC m=+1518.122234413" observedRunningTime="2026-04-16 04:17:41.257547542 +0000 UTC m=+1530.298042538" watchObservedRunningTime="2026-04-16 04:21:34.851585153 +0000 UTC m=+1763.892080152"
Apr 16 04:21:36.404634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6-rootfs.mount: Deactivated successfully.
Apr 16 04:21:37.844951 sshd[8045]: Connection closed by 10.0.0.1 port 45978
Apr 16 04:21:37.980405 sshd-session[8022]: pam_unix(sshd:session): session closed for user core
Apr 16 04:21:41.057021 systemd[1]: sshd@105-10.0.0.115:22-10.0.0.1:45978.service: Deactivated successfully.
Apr 16 04:21:41.230067 systemd[1]: sshd@105-10.0.0.115:22-10.0.0.1:45978.service: Consumed 5.118s CPU time, 3.5M memory peak.
Apr 16 04:21:41.665625 containerd[1575]: time="2026-04-16T04:21:41.280281356Z" level=info msg="TaskExit event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1 exited_at:{seconds:1776313274 nanos:836058031}"
Apr 16 04:21:41.688627 systemd[1]: session-106.scope: Deactivated successfully.
Apr 16 04:21:41.695774 systemd[1]: session-106.scope: Consumed 1.905s CPU time, 18.7M memory peak.
Apr 16 04:21:41.822609 systemd-logind[1549]: Session 106 logged out. Waiting for processes to exit.
Apr 16 04:21:42.457456 systemd-logind[1549]: Removed session 106.
Apr 16 04:21:43.806443 kubelet[2980]: E0416 04:21:43.803464 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.392s"
Apr 16 04:21:44.282007 containerd[1575]: time="2026-04-16T04:21:44.061894932Z" level=warning msg="container event discarded" container=e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec type=CONTAINER_CREATED_EVENT
Apr 16 04:21:44.282007 containerd[1575]: time="2026-04-16T04:21:44.062763171Z" level=warning msg="container event discarded" container=e3ea8b73129859c183c013c7dcebcacf6f6405ce5c33abaf54f398449c6746ec type=CONTAINER_STARTED_EVENT
Apr 16 04:21:44.499039 systemd[1]: Started sshd@106-10.0.0.115:22-10.0.0.1:35666.service - OpenSSH per-connection server daemon (10.0.0.1:35666).
Apr 16 04:21:47.785257 containerd[1575]: time="2026-04-16T04:21:47.781692431Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 16 04:21:48.911750 kubelet[2980]: E0416 04:21:48.412062 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:21:49.395842 kubelet[2980]: E0416 04:21:49.112534 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.148s"
Apr 16 04:21:49.395842 kubelet[2980]: I0416 04:21:49.180781 2980 scope.go:122] "RemoveContainer" containerID="1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090"
Apr 16 04:21:50.308658 kubelet[2980]: I0416 04:21:50.295031 2980 scope.go:122] "RemoveContainer" containerID="37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6"
Apr 16 04:21:50.911810 kubelet[2980]: E0416 04:21:50.577971 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:21:51.500808 containerd[1575]: time="2026-04-16T04:21:51.491380007Z" level=error msg="Failed to handle backOff event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1 exited_at:{seconds:1776313274 nanos:836058031} for ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 16 04:21:51.500808 containerd[1575]: time="2026-04-16T04:21:51.491850996Z" level=info msg="TaskExit event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312}"
Apr 16 04:21:52.289626 containerd[1575]: time="2026-04-16T04:21:52.270075451Z" level=error msg="ttrpc: received message on inactive stream" stream=77
Apr 16 04:21:52.372722 containerd[1575]: time="2026-04-16T04:21:52.313248691Z" level=error msg="ttrpc: received message on inactive stream" stream=81
Apr 16 04:21:52.378769 kubelet[2980]: E0416 04:21:51.861866 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:21:52.667646 containerd[1575]: time="2026-04-16T04:21:52.609880969Z" level=info msg="RemoveContainer for \"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\""
Apr 16 04:21:54.299507 sshd[8072]: Accepted publickey for core from 10.0.0.1 port 35666 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:21:54.835567 sshd-session[8072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:21:55.241639 kubelet[2980]: I0416 04:21:55.236443 2980 scope.go:122] "RemoveContainer" containerID="1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090"
Apr 16 04:21:55.680444 systemd-logind[1549]: New session 107 of user core.
Apr 16 04:21:56.003391 systemd[1]: Started session-107.scope - Session 107 of User core.
Apr 16 04:21:59.281958 containerd[1575]: time="2026-04-16T04:21:59.135902731Z" level=info msg="RemoveContainer for \"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" returns successfully"
Apr 16 04:21:59.731625 containerd[1575]: time="2026-04-16T04:21:59.658431849Z" level=warning msg="container event discarded" container=410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8 type=CONTAINER_CREATED_EVENT
Apr 16 04:22:02.290668 containerd[1575]: time="2026-04-16T04:22:02.278730614Z" level=error msg="Failed to handle backOff event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312} for a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 16 04:22:02.290668 containerd[1575]: time="2026-04-16T04:22:02.282575312Z" level=info msg="TaskExit event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1
exited_at:{seconds:1776313274 nanos:836058031}" Apr 16 04:22:03.945658 containerd[1575]: time="2026-04-16T04:22:02.282719034Z" level=error msg="ContainerStatus for \"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\": not found" Apr 16 04:22:03.945658 containerd[1575]: time="2026-04-16T04:22:03.507574176Z" level=error msg="ttrpc: received message on inactive stream" stream=81 Apr 16 04:22:04.104609 kubelet[2980]: E0416 04:22:03.471648 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.19s" Apr 16 04:22:06.985341 kubelet[2980]: E0416 04:22:06.075418 2980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\": not found" containerID="1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090" Apr 16 04:22:07.827817 kubelet[2980]: E0416 04:22:07.165602 2980 kuberuntime_gc.go:151] "Failed to remove container" err="failed to get container status \"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\": rpc error: code = NotFound desc = an error occurred when try to find container \"1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090\": not found" containerID="1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090" Apr 16 04:22:08.459554 containerd[1575]: time="2026-04-16T04:22:08.383661960Z" level=warning msg="container event discarded" container=410bbdcf952bce90ad2ea58741749a5bb26f2d1412713184dfc3c52e0c57ffb8 type=CONTAINER_STARTED_EVENT Apr 16 04:22:12.912224 containerd[1575]: time="2026-04-16T04:22:12.903736529Z" level=error msg="ttrpc: received message on inactive stream" stream=91 Apr 16 04:22:13.788323 containerd[1575]: 
time="2026-04-16T04:22:12.988441487Z" level=error msg="Failed to handle backOff event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1 exited_at:{seconds:1776313274 nanos:836058031} for ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 16 04:22:13.851670 containerd[1575]: time="2026-04-16T04:22:13.797043983Z" level=info msg="TaskExit event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312}" Apr 16 04:22:14.976142 containerd[1575]: time="2026-04-16T04:22:14.802220140Z" level=warning msg="container event discarded" container=833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254 type=CONTAINER_STOPPED_EVENT Apr 16 04:22:15.541933 containerd[1575]: time="2026-04-16T04:22:14.843906748Z" level=error msg="ttrpc: received message on inactive stream" stream=89 Apr 16 04:22:17.604582 containerd[1575]: time="2026-04-16T04:22:17.591310623Z" level=warning msg="container event discarded" container=9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9 type=CONTAINER_CREATED_EVENT Apr 16 04:22:23.645412 containerd[1575]: time="2026-04-16T04:22:23.643511412Z" level=error msg="Failed to handle backOff event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312} for a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded" Apr 16 04:22:24.376593 containerd[1575]: 
time="2026-04-16T04:22:24.150602172Z" level=error msg="ttrpc: received message on inactive stream" stream=89 Apr 16 04:22:24.376593 containerd[1575]: time="2026-04-16T04:22:24.152880868Z" level=error msg="ttrpc: received message on inactive stream" stream=93 Apr 16 04:22:24.376593 containerd[1575]: time="2026-04-16T04:22:24.347554626Z" level=info msg="TaskExit event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1 exited_at:{seconds:1776313274 nanos:836058031}" Apr 16 04:22:25.994729 sshd[8092]: Connection closed by 10.0.0.1 port 35666 Apr 16 04:22:26.270281 sshd-session[8072]: pam_unix(sshd:session): session closed for user core Apr 16 04:22:27.827908 systemd[1]: sshd@106-10.0.0.115:22-10.0.0.1:35666.service: Deactivated successfully. Apr 16 04:22:27.995811 systemd[1]: sshd@106-10.0.0.115:22-10.0.0.1:35666.service: Consumed 2.414s CPU time, 3.2M memory peak. Apr 16 04:22:28.604442 containerd[1575]: time="2026-04-16T04:22:28.099014978Z" level=warning msg="container event discarded" container=9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9 type=CONTAINER_STARTED_EVENT Apr 16 04:22:28.551317 systemd[1]: session-107.scope: Deactivated successfully. Apr 16 04:22:28.607433 systemd[1]: session-107.scope: Consumed 14.210s CPU time, 16.5M memory peak. Apr 16 04:22:29.057754 systemd-logind[1549]: Session 107 logged out. Waiting for processes to exit. Apr 16 04:22:29.206836 systemd-logind[1549]: Removed session 107. Apr 16 04:22:30.760562 containerd[1575]: time="2026-04-16T04:22:30.753504500Z" level=warning msg="container event discarded" container=7b2ec82ae169c492f9b993a65e87ff0dc2a26308a7005643b39a9af1f78a14d8 type=CONTAINER_CREATED_EVENT Apr 16 04:22:33.671777 systemd[1]: Started sshd@107-10.0.0.115:22-10.0.0.1:60686.service - OpenSSH per-connection server daemon (10.0.0.1:60686). 
Apr 16 04:22:34.654750 containerd[1575]: time="2026-04-16T04:22:34.477756541Z" level=error msg="Failed to handle backOff event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1 exited_at:{seconds:1776313274 nanos:836058031} for ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 16 04:22:35.407534 containerd[1575]: time="2026-04-16T04:22:34.747843842Z" level=info msg="TaskExit event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312}"
Apr 16 04:22:35.705978 containerd[1575]: time="2026-04-16T04:22:35.669447322Z" level=error msg="ttrpc: received message on inactive stream" stream=97
Apr 16 04:22:35.705978 containerd[1575]: time="2026-04-16T04:22:35.669944618Z" level=error msg="ttrpc: received message on inactive stream" stream=101
Apr 16 04:22:37.241447 kubelet[2980]: E0416 04:22:37.240076 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="33.44s"
Apr 16 04:22:38.444066 containerd[1575]: time="2026-04-16T04:22:38.413665793Z" level=warning msg="container event discarded" container=7b2ec82ae169c492f9b993a65e87ff0dc2a26308a7005643b39a9af1f78a14d8 type=CONTAINER_STARTED_EVENT
Apr 16 04:22:45.023407 containerd[1575]: time="2026-04-16T04:22:44.995791572Z" level=error msg="Failed to handle backOff event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312} for a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 16 04:22:45.678559 containerd[1575]: time="2026-04-16T04:22:45.240341572Z" level=info msg="TaskExit event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1 exited_at:{seconds:1776313274 nanos:836058031}"
Apr 16 04:22:45.750425 containerd[1575]: time="2026-04-16T04:22:45.747698334Z" level=error msg="ttrpc: received message on inactive stream" stream=103
Apr 16 04:22:49.659153 kubelet[2980]: I0416 04:22:49.643932 2980 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 16 04:22:50.081426 kubelet[2980]: E0416 04:22:50.080037 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:22:53.878750 containerd[1575]: time="2026-04-16T04:22:53.874446424Z" level=info msg="StopContainer for \"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" with timeout 30 (s)"
Apr 16 04:22:55.533331 containerd[1575]: time="2026-04-16T04:22:55.532258258Z" level=error msg="Failed to handle backOff event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1 exited_at:{seconds:1776313274 nanos:836058031} for ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 16 04:22:56.158634 containerd[1575]: time="2026-04-16T04:22:55.712150915Z" level=info msg="TaskExit event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312}"
Apr 16 04:22:56.706948 containerd[1575]: time="2026-04-16T04:22:56.690857387Z" level=error msg="ttrpc: received message on inactive stream" stream=107
Apr 16 04:22:57.459011 containerd[1575]: time="2026-04-16T04:22:56.868884371Z" level=error msg="ttrpc: received message on inactive stream" stream=111
Apr 16 04:22:57.553539 containerd[1575]: time="2026-04-16T04:22:57.472224580Z" level=error msg="ttrpc: received message on inactive stream" stream=113
Apr 16 04:22:57.796904 containerd[1575]: time="2026-04-16T04:22:57.275939392Z" level=error msg="get state for ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778" error="context deadline exceeded"
Apr 16 04:22:57.796904 containerd[1575]: time="2026-04-16T04:22:57.553867629Z" level=warning msg="unknown status" status=0
Apr 16 04:22:57.796904 containerd[1575]: time="2026-04-16T04:22:57.862174156Z" level=info msg="Stop container \"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" with signal terminated"
Apr 16 04:23:00.091636 kubelet[2980]: I0416 04:22:59.983070 2980 scope.go:122] "RemoveContainer" containerID="37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6"
Apr 16 04:23:01.635072 sshd[8130]: Accepted publickey for core from 10.0.0.1 port 60686 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:23:02.799239 sshd-session[8130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:23:03.073525 kubelet[2980]: E0416 04:23:03.057775 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:23:03.817640 kubelet[2980]: E0416 04:23:03.810866 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:23:03.819306 kubelet[2980]: E0416 04:23:03.819276 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="19.471s"
Apr 16 04:23:03.872720 systemd-logind[1549]: New session 108 of user core.
Apr 16 04:23:03.891265 systemd[1]: Started session-108.scope - Session 108 of User core.
Apr 16 04:23:05.071281 kubelet[2980]: E0416 04:23:05.071239 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:23:05.857667 containerd[1575]: time="2026-04-16T04:23:05.790004632Z" level=error msg="Failed to handle backOff event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312} for a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 16 04:23:06.422840 containerd[1575]: time="2026-04-16T04:23:06.244711994Z" level=error msg="ttrpc: received message on inactive stream" stream=117
Apr 16 04:23:06.251723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9-rootfs.mount: Deactivated successfully.
Apr 16 04:23:06.878438 sshd[8162]: Connection closed by 10.0.0.1 port 60686
Apr 16 04:23:06.883068 sshd-session[8130]: pam_unix(sshd:session): session closed for user core
Apr 16 04:23:07.235682 systemd[1]: sshd@107-10.0.0.115:22-10.0.0.1:60686.service: Deactivated successfully.
Apr 16 04:23:07.239877 systemd[1]: sshd@107-10.0.0.115:22-10.0.0.1:60686.service: Consumed 6.264s CPU time, 3.2M memory peak.
Apr 16 04:23:07.291798 systemd[1]: session-108.scope: Deactivated successfully.
Apr 16 04:23:07.345679 systemd-logind[1549]: Session 108 logged out. Waiting for processes to exit.
Apr 16 04:23:07.456940 systemd-logind[1549]: Removed session 108.
Apr 16 04:23:08.336676 containerd[1575]: time="2026-04-16T04:23:08.321936511Z" level=info msg="StopContainer for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" with timeout 2 (s)"
Apr 16 04:23:08.409070 containerd[1575]: time="2026-04-16T04:23:08.408220588Z" level=info msg="Stop container \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" with signal terminated"
Apr 16 04:23:09.233040 systemd[1]: cri-containerd-9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9.scope: Deactivated successfully.
Apr 16 04:23:09.272575 systemd[1]: cri-containerd-9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9.scope: Consumed 46.893s CPU time, 288.8M memory peak, 23.5M read from disk, 1.7M written to disk.
Apr 16 04:23:09.435571 kubelet[2980]: E0416 04:23:09.433073 2980 kuberuntime_container.go:772] "PreStop hook failed" err="command '/bin/calico-node -shutdown' exited with 137: " pod="calico-system/calico-node-kgtx5" podUID="89b5fbad-4c87-4aac-9951-121c09bbd556" containerName="calico-node" containerID="containerd://9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9"
Apr 16 04:23:09.437878 containerd[1575]: time="2026-04-16T04:23:09.436154112Z" level=info msg="received container exit event container_id:\"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" id:\"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" pid:7236 exited_at:{seconds:1776313389 nanos:393669332}"
Apr 16 04:23:10.531731 containerd[1575]: time="2026-04-16T04:23:10.531281894Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"8e93b810cd35c7a8e2e1826deee9f03e71dfcbe0eff150aba1d00c55d10baacf\": cannot exec in a stopped state"
Apr 16 04:23:10.544631 kubelet[2980]: E0416 04:23:10.543783 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"8e93b810cd35c7a8e2e1826deee9f03e71dfcbe0eff150aba1d00c55d10baacf\": cannot exec in a stopped state" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:23:11.193544 containerd[1575]: time="2026-04-16T04:23:11.178965362Z" level=info msg="Kill container \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\""
Apr 16 04:23:12.393898 containerd[1575]: time="2026-04-16T04:23:12.203643518Z" level=error msg="ttrpc: failed to handle message" error="context canceled" stream=443
Apr 16 04:23:12.462165 containerd[1575]: time="2026-04-16T04:23:12.216042346Z" level=error msg="StopContainer for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = Unknown desc = failed to kill container \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\": ttrpc: closed"
Apr 16 04:23:12.462165 containerd[1575]: time="2026-04-16T04:23:12.369644759Z" level=info msg="TaskExit event container_id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" id:\"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" pid:6742 exit_status:1 exited_at:{seconds:1776313274 nanos:836058031}"
Apr 16 04:23:12.459367 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9-rootfs.mount: Deactivated successfully.
Apr 16 04:23:12.462719 kubelet[2980]: E0416 04:23:12.458341 2980 log.go:32] "StopContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to kill container \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\": ttrpc: closed" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9"
Apr 16 04:23:12.462719 kubelet[2980]: E0416 04:23:12.458640 2980 kuberuntime_container.go:895] "Container termination failed with gracePeriod" err="rpc error: code = Unknown desc = failed to kill container \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\": ttrpc: closed" pod="calico-system/calico-node-kgtx5" podUID="89b5fbad-4c87-4aac-9951-121c09bbd556" containerName="calico-node" containerID="containerd://9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" gracePeriod=2
Apr 16 04:23:12.462719 kubelet[2980]: E0416 04:23:12.458673 2980 kuberuntime_manager.go:1437] "killContainer for pod failed" err="rpc error: code = Unknown desc = failed to kill container \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\": ttrpc: closed" containerName="calico-node" containerID={"Type":"containerd","ID":"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9"} pod="calico-system/calico-node-kgtx5"
Apr 16 04:23:12.462719 kubelet[2980]: E0416 04:23:12.458809 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillContainer\" for \"calico-node\" with KillContainerError: \"rpc error: code = Unknown desc = failed to kill container \\\"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\\\": ttrpc: closed\"" pod="calico-system/calico-node-kgtx5" podUID="89b5fbad-4c87-4aac-9951-121c09bbd556"
Apr 16 04:23:12.516398 containerd[1575]: time="2026-04-16T04:23:12.510629951Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: not found"
Apr 16 04:23:12.800777 kubelet[2980]: E0416 04:23:12.766032 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: not found" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:23:13.270359 containerd[1575]: time="2026-04-16T04:23:13.267278386Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 16 04:23:13.432032 systemd[1]: Started sshd@108-10.0.0.115:22-10.0.0.1:44632.service - OpenSSH per-connection server daemon (10.0.0.1:44632).
Apr 16 04:23:13.449611 kubelet[2980]: E0416 04:23:13.435011 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:23:13.451926 containerd[1575]: time="2026-04-16T04:23:13.451873095Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 16 04:23:13.588491 kubelet[2980]: E0416 04:23:13.536511 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:23:13.755176 kubelet[2980]: E0416 04:23:13.746857 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.178s"
Apr 16 04:23:13.782718 containerd[1575]: time="2026-04-16T04:23:13.770256888Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 16 04:23:13.798983 kubelet[2980]: E0416 04:23:13.790838 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:23:13.819843 containerd[1575]: time="2026-04-16T04:23:13.819771389Z" level=error msg="ExecSync for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state"
Apr 16 04:23:13.849949 kubelet[2980]: E0416 04:23:13.838834 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:23:14.318390 kubelet[2980]: I0416 04:23:14.286848 2980 scope.go:122] "RemoveContainer" containerID="833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254"
Apr 16 04:23:14.550989 kubelet[2980]: I0416 04:23:14.549377 2980 scope.go:122] "RemoveContainer" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9"
Apr 16 04:23:14.903657 kubelet[2980]: E0416 04:23:14.903583 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-kgtx5_calico-system(89b5fbad-4c87-4aac-9951-121c09bbd556)\"" pod="calico-system/calico-node-kgtx5" podUID="89b5fbad-4c87-4aac-9951-121c09bbd556"
Apr 16 04:23:15.189965 containerd[1575]: time="2026-04-16T04:23:15.176195026Z" level=info msg="RemoveContainer for \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\""
Apr 16 04:23:15.767023 kubelet[2980]: I0416 04:23:15.766702 2980 scope.go:122] "RemoveContainer" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9"
Apr 16 04:23:15.921717 kubelet[2980]: E0416 04:23:15.904476 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-kgtx5_calico-system(89b5fbad-4c87-4aac-9951-121c09bbd556)\"" pod="calico-system/calico-node-kgtx5" podUID="89b5fbad-4c87-4aac-9951-121c09bbd556"
Apr 16 04:23:15.988690 containerd[1575]: time="2026-04-16T04:23:15.988614045Z" level=info msg="RemoveContainer for \"833de9346f5e97ef61cbd34e90f93b0d9f2c186a9fe754b05f36c45e64b14254\" returns successfully"
Apr 16 04:23:16.351594 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778-rootfs.mount: Deactivated successfully.
Apr 16 04:23:16.743030 sshd[8257]: Accepted publickey for core from 10.0.0.1 port 44632 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:23:16.748746 sshd-session[8257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:23:16.811373 containerd[1575]: time="2026-04-16T04:23:16.810979746Z" level=info msg="StopContainer for \"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" returns successfully"
Apr 16 04:23:17.035965 kubelet[2980]: E0416 04:23:17.024236 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:23:17.231770 systemd-logind[1549]: New session 109 of user core.
Apr 16 04:23:17.250375 containerd[1575]: time="2026-04-16T04:23:17.249185155Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:6,}"
Apr 16 04:23:17.250846 systemd[1]: Started session-109.scope - Session 109 of User core.
Apr 16 04:23:17.385934 kubelet[2980]: I0416 04:23:17.379074 2980 scope.go:122] "RemoveContainer" containerID="0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412"
Apr 16 04:23:17.420238 containerd[1575]: time="2026-04-16T04:23:17.420001058Z" level=info msg="RemoveContainer for \"0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412\""
Apr 16 04:23:17.530229 containerd[1575]: time="2026-04-16T04:23:17.529655454Z" level=info msg="Container 2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:23:17.707887 containerd[1575]: time="2026-04-16T04:23:17.698410811Z" level=info msg="RemoveContainer for \"0df13c4ece9346f198d441e2d5f901aff25a132eb40e4b60df568cbf5c317412\" returns successfully"
Apr 16 04:23:17.804595 containerd[1575]: time="2026-04-16T04:23:17.803948077Z" level=info msg="CreateContainer within sandbox \"b7261e07692cf1c264b8264ed0ad00541dfc16ab99cc26f9b0a9395a571d2c8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:6,} returns container id \"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\""
Apr 16 04:23:17.983705 containerd[1575]: time="2026-04-16T04:23:17.925896906Z" level=info msg="StartContainer for \"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\""
Apr 16 04:23:17.988258 containerd[1575]: time="2026-04-16T04:23:17.986431015Z" level=info msg="connecting to shim 2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4" address="unix:///run/containerd/s/64fc1d346c666396b4a6f4eda52f8f58d8abeacdc8da519fac54d1b45f3029a3" protocol=ttrpc version=3
Apr 16 04:23:20.080292 kubelet[2980]: E0416 04:23:20.072511 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.409s"
Apr 16 04:23:22.103721 kubelet[2980]: I0416 04:23:22.086938 2980 scope.go:122] "RemoveContainer" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9"
Apr 16 04:23:22.653941 containerd[1575]: time="2026-04-16T04:23:22.647818909Z" level=info msg="TaskExit event container_id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" id:\"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" pid:6740 exit_status:1 exited_at:{seconds:1776313280 nanos:646819312}"
Apr 16 04:23:23.474738 kubelet[2980]: E0416 04:23:23.343887 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.463s"
Apr 16 04:23:27.479774 kubelet[2980]: E0416 04:23:27.474419 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4s"
Apr 16 04:23:28.730140 sshd[8273]: Connection closed by 10.0.0.1 port 44632
Apr 16 04:23:28.739884 systemd[1]: Started cri-containerd-2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4.scope - libcontainer container 2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4.
Apr 16 04:23:28.752686 sshd-session[8257]: pam_unix(sshd:session): session closed for user core
Apr 16 04:23:28.898656 containerd[1575]: time="2026-04-16T04:23:28.896959978Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}"
Apr 16 04:23:29.007638 kubelet[2980]: E0416 04:23:28.988231 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.426s"
Apr 16 04:23:28.990579 systemd[1]: sshd@108-10.0.0.115:22-10.0.0.1:44632.service: Deactivated successfully.
Apr 16 04:23:29.044573 systemd[1]: session-109.scope: Deactivated successfully.
Apr 16 04:23:29.045320 systemd[1]: session-109.scope: Consumed 5.004s CPU time, 14.9M memory peak.
Apr 16 04:23:29.052251 systemd-logind[1549]: Session 109 logged out. Waiting for processes to exit.
Apr 16 04:23:29.105075 systemd-logind[1549]: Removed session 109.
Apr 16 04:23:29.722520 containerd[1575]: time="2026-04-16T04:23:29.713250484Z" level=info msg="Container 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:23:30.262965 containerd[1575]: time="2026-04-16T04:23:30.262747137Z" level=info msg="CreateContainer within sandbox \"d274b66e3a7d173ca74ea653fd497e7a000bb57082609d9eaa03b24e2dc7ade0\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\""
Apr 16 04:23:30.321365 containerd[1575]: time="2026-04-16T04:23:30.313155980Z" level=info msg="StartContainer for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\""
Apr 16 04:23:30.344474 containerd[1575]: time="2026-04-16T04:23:30.342377469Z" level=info msg="connecting to shim 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" address="unix:///run/containerd/s/aeabc3715f557963c617c8591f62e432aca8901fc2a59ed43a1f9f47d5f9452d" protocol=ttrpc version=3
Apr 16 04:23:30.894460 containerd[1575]: time="2026-04-16T04:23:30.887943551Z" level=error msg="collecting metrics for a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9" error="ttrpc: closed"
Apr 16 04:23:31.072914 systemd[1]: Started cri-containerd-839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd.scope - libcontainer container 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd.
Apr 16 04:23:31.330684 containerd[1575]: time="2026-04-16T04:23:31.329282644Z" level=info msg="StartContainer for \"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" returns successfully"
Apr 16 04:23:32.425622 kubelet[2980]: E0416 04:23:32.424962 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:23:32.459234 kubelet[2980]: I0416 04:23:32.446997 2980 scope.go:122] "RemoveContainer" containerID="23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b"
Apr 16 04:23:32.518382 kubelet[2980]: I0416 04:23:32.489661 2980 scope.go:122] "RemoveContainer" containerID="a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9"
Apr 16 04:23:32.570256 kubelet[2980]: E0416 04:23:32.548574 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b"
Apr 16 04:23:32.869014 containerd[1575]: time="2026-04-16T04:23:32.851019463Z" level=info msg="RemoveContainer for \"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\""
Apr 16 04:23:33.098811 containerd[1575]: time="2026-04-16T04:23:33.098306381Z" level=error msg="get state for 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" error="context deadline exceeded"
Apr 16 04:23:33.114015 containerd[1575]: time="2026-04-16T04:23:33.113743934Z" level=warning msg="unknown status" status=0
Apr 16 04:23:33.375196 containerd[1575]: time="2026-04-16T04:23:33.321478309Z" level=info msg="RemoveContainer for \"23c15d5a1046de9e00c84ac9d12d5bb738bbba7b1188cd048d69bd73620d4f1b\" returns successfully"
Apr 16 04:23:33.602603 kubelet[2980]: E0416 04:23:33.598009 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:23:33.900683 containerd[1575]: time="2026-04-16T04:23:33.890579388Z" level=error msg="ttrpc: received message on inactive stream" stream=3
Apr 16 04:23:34.138264 systemd[1]: Started sshd@109-10.0.0.115:22-10.0.0.1:36576.service - OpenSSH per-connection server daemon (10.0.0.1:36576).
Apr 16 04:23:36.056326 kubelet[2980]: E0416 04:23:36.052811 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.443s"
Apr 16 04:23:36.348055 kubelet[2980]: E0416 04:23:36.159695 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:23:36.589811 kubelet[2980]: I0416 04:23:36.582043 2980 scope.go:122] "RemoveContainer" containerID="9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9"
Apr 16 04:23:43.157702 containerd[1575]: time="2026-04-16T04:23:43.155018222Z" level=info msg="RemoveContainer for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\""
Apr 16 04:23:43.430818 containerd[1575]: time="2026-04-16T04:23:43.419771531Z" level=info msg="RemoveContainer for \"9de1af9cbecb14680dfed0919f22c80f8916b0efda5acc862537273d0f5d51a9\" returns successfully"
Apr 16 04:23:43.467198 kubelet[2980]: E0416 04:23:43.449948 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.557s"
Apr 16 04:23:43.550038 containerd[1575]: time="2026-04-16T04:23:43.549379912Z" level=info msg="StartContainer for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" returns successfully"
Apr 16 04:23:43.765685 sshd[8358]: Accepted publickey for core from 10.0.0.1 port 36576 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:23:43.800956 sshd-session[8358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:23:44.165390 systemd-logind[1549]: New session 110 of user core.
Apr 16 04:23:44.239567 systemd[1]: Started session-110.scope - Session 110 of User core.
Apr 16 04:23:46.492167 kubelet[2980]: E0416 04:23:46.490625 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.904s"
Apr 16 04:23:46.857525 kubelet[2980]: E0416 04:23:46.831610 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:23:49.590947 kubelet[2980]: E0416 04:23:49.569937 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:23:51.783235 kubelet[2980]: E0416 04:23:51.775323 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.047s"
Apr 16 04:23:52.137080 sshd[8378]: Connection closed by 10.0.0.1 port 36576
Apr 16 04:23:52.587922 sshd-session[8358]: pam_unix(sshd:session): session closed for user core
Apr 16 04:23:53.594627 systemd[1]: sshd@109-10.0.0.115:22-10.0.0.1:36576.service: Deactivated successfully.
Apr 16 04:23:53.742575 systemd[1]: sshd@109-10.0.0.115:22-10.0.0.1:36576.service: Consumed 2.152s CPU time, 3.2M memory peak.
Apr 16 04:23:54.336327 kubelet[2980]: E0416 04:23:53.974932 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.1s"
Apr 16 04:23:54.336581 systemd[1]: session-110.scope: Deactivated successfully.
Apr 16 04:23:54.350616 systemd[1]: session-110.scope: Consumed 2.260s CPU time, 16M memory peak.
Apr 16 04:23:54.478914 systemd-logind[1549]: Session 110 logged out. Waiting for processes to exit.
Apr 16 04:23:54.540424 systemd-logind[1549]: Removed session 110.
Apr 16 04:23:57.447024 systemd[1]: Started sshd@110-10.0.0.115:22-10.0.0.1:58824.service - OpenSSH per-connection server daemon (10.0.0.1:58824).
Apr 16 04:23:59.884679 sshd[8398]: Accepted publickey for core from 10.0.0.1 port 58824 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:24:00.114527 sshd-session[8398]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:24:00.815421 systemd-logind[1549]: New session 111 of user core.
Apr 16 04:24:01.013070 systemd[1]: Started session-111.scope - Session 111 of User core.
Apr 16 04:24:04.480689 sshd[8402]: Connection closed by 10.0.0.1 port 58824
Apr 16 04:24:04.524257 sshd-session[8398]: pam_unix(sshd:session): session closed for user core
Apr 16 04:24:04.843862 systemd[1]: sshd@110-10.0.0.115:22-10.0.0.1:58824.service: Deactivated successfully.
Apr 16 04:24:04.929509 systemd[1]: session-111.scope: Deactivated successfully.
Apr 16 04:24:04.930558 systemd[1]: session-111.scope: Consumed 1.280s CPU time, 16.2M memory peak.
Apr 16 04:24:05.095760 systemd-logind[1549]: Session 111 logged out. Waiting for processes to exit.
Apr 16 04:24:05.174607 systemd-logind[1549]: Removed session 111.
Apr 16 04:24:09.633298 systemd[1]: Started sshd@111-10.0.0.115:22-10.0.0.1:40498.service - OpenSSH per-connection server daemon (10.0.0.1:40498).
Apr 16 04:24:11.476232 sshd[8427]: Accepted publickey for core from 10.0.0.1 port 40498 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:24:11.548508 sshd-session[8427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:24:11.632894 kubelet[2980]: I0416 04:24:11.628621 2980 scope.go:122] "RemoveContainer" containerID="37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6"
Apr 16 04:24:11.640911 kubelet[2980]: E0416 04:24:11.639203 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:24:11.640911 kubelet[2980]: E0416 04:24:11.639838 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:24:11.907147 systemd-logind[1549]: New session 112 of user core.
Apr 16 04:24:11.960724 systemd[1]: Started session-112.scope - Session 112 of User core.
Apr 16 04:24:14.668078 sshd[8435]: Connection closed by 10.0.0.1 port 40498
Apr 16 04:24:14.667550 sshd-session[8427]: pam_unix(sshd:session): session closed for user core
Apr 16 04:24:14.845410 systemd[1]: sshd@111-10.0.0.115:22-10.0.0.1:40498.service: Deactivated successfully.
Apr 16 04:24:14.896178 systemd[1]: session-112.scope: Deactivated successfully.
Apr 16 04:24:14.964747 systemd-logind[1549]: Session 112 logged out. Waiting for processes to exit.
Apr 16 04:24:15.026235 systemd-logind[1549]: Removed session 112.
Apr 16 04:24:15.609781 kubelet[2980]: E0416 04:24:15.609379 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:24:19.754411 systemd[1]: Started sshd@112-10.0.0.115:22-10.0.0.1:52418.service - OpenSSH per-connection server daemon (10.0.0.1:52418).
Apr 16 04:24:20.832175 sshd[8596]: Accepted publickey for core from 10.0.0.1 port 52418 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:24:20.897653 sshd-session[8596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:24:21.327185 systemd-logind[1549]: New session 113 of user core.
Apr 16 04:24:21.337152 systemd[1]: Started session-113.scope - Session 113 of User core.
Apr 16 04:24:21.640632 kubelet[2980]: E0416 04:24:21.619454 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:24:23.608477 kubelet[2980]: E0416 04:24:23.608390 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.024s"
Apr 16 04:24:34.080469 kubelet[2980]: E0416 04:24:34.079699 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.404s"
Apr 16 04:24:34.384057 sshd[8620]: Connection closed by 10.0.0.1 port 52418
Apr 16 04:24:34.384954 sshd-session[8596]: pam_unix(sshd:session): session closed for user core
Apr 16 04:24:34.996751 systemd[1]: sshd@112-10.0.0.115:22-10.0.0.1:52418.service: Deactivated successfully.
Apr 16 04:24:35.747776 systemd[1]: session-113.scope: Deactivated successfully.
Apr 16 04:24:35.877024 systemd[1]: session-113.scope: Consumed 4.639s CPU time, 16.1M memory peak.
Apr 16 04:24:36.112903 systemd[1]: cri-containerd-2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4.scope: Deactivated successfully.
Apr 16 04:24:36.838493 containerd[1575]: time="2026-04-16T04:24:36.589891957Z" level=info msg="received container exit event container_id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" pid:8308 exit_status:1 exited_at:{seconds:1776313476 nanos:584041417}"
Apr 16 04:24:36.838493 containerd[1575]: time="2026-04-16T04:24:36.259497161Z" level=error msg="unable to parse PSI data: read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice/cri-containerd-2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4.scope/cpu.pressure: no such device"
Apr 16 04:24:36.263774 systemd[1]: cri-containerd-2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4.scope: Consumed 8.436s CPU time, 29.8M memory peak, 8.9M read from disk.
Apr 16 04:24:36.849641 systemd-logind[1549]: Session 113 logged out. Waiting for processes to exit.
Apr 16 04:24:37.243495 systemd-logind[1549]: Removed session 113.
Apr 16 04:24:38.057854 kubelet[2980]: E0416 04:24:37.838883 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.993s"
Apr 16 04:24:41.103833 systemd[1]: Started sshd@113-10.0.0.115:22-10.0.0.1:55784.service - OpenSSH per-connection server daemon (10.0.0.1:55784).
Apr 16 04:24:43.390325 kubelet[2980]: E0416 04:24:43.389637 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.106s"
Apr 16 04:24:47.343732 containerd[1575]: time="2026-04-16T04:24:47.310981839Z" level=error msg="failed to handle container TaskExit event container_id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" pid:8308 exit_status:1 exited_at:{seconds:1776313476 nanos:584041417}" error="failed to stop container: failed to delete task: context deadline exceeded"
Apr 16 04:24:48.667367 containerd[1575]: time="2026-04-16T04:24:48.658484118Z" level=error msg="ttrpc: received message on inactive stream" stream=43
Apr 16 04:24:48.676352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4-rootfs.mount: Deactivated successfully.
Apr 16 04:24:49.036572 containerd[1575]: time="2026-04-16T04:24:48.767011337Z" level=info msg="TaskExit event container_id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" pid:8308 exit_status:1 exited_at:{seconds:1776313476 nanos:584041417}"
Apr 16 04:24:58.692588 containerd[1575]: time="2026-04-16T04:24:58.672037788Z" level=error msg="Failed to handle backOff event container_id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" pid:8308 exit_status:1 exited_at:{seconds:1776313476 nanos:584041417} for 2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 16 04:24:59.426807 containerd[1575]: time="2026-04-16T04:24:59.425921139Z" level=error msg="ttrpc: received message on inactive stream" stream=49
Apr 16 04:24:59.426807 containerd[1575]: time="2026-04-16T04:24:59.426696224Z" level=error msg="ttrpc: received message on inactive stream" stream=53
Apr 16 04:25:01.222496 containerd[1575]: time="2026-04-16T04:25:01.220518381Z" level=info msg="TaskExit event container_id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" pid:8308 exit_status:1 exited_at:{seconds:1776313476 nanos:584041417}"
Apr 16 04:25:01.717607 sshd[8658]: Accepted publickey for core from 10.0.0.1 port 55784 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:25:01.999244 sshd-session[8658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:25:02.879393 kubelet[2980]: E0416 04:25:02.564861 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="18.99s"
Apr 16 04:25:03.358053 kubelet[2980]: E0416 04:25:03.274910 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:25:03.497919 systemd-logind[1549]: New session 114 of user core.
Apr 16 04:25:04.214453 systemd[1]: Started session-114.scope - Session 114 of User core.
Apr 16 04:25:04.801671 kubelet[2980]: I0416 04:25:04.677711 2980 scope.go:122] "RemoveContainer" containerID="a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9"
Apr 16 04:25:05.680817 kubelet[2980]: E0416 04:25:05.677527 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=tigera-operator pod=tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b)\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b"
Apr 16 04:25:11.355969 containerd[1575]: time="2026-04-16T04:25:11.311135418Z" level=error msg="get state for 2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4" error="context deadline exceeded"
Apr 16 04:25:12.213223 containerd[1575]: time="2026-04-16T04:25:11.311203607Z" level=error msg="ttrpc: received message on inactive stream" stream=65
Apr 16 04:25:12.213223 containerd[1575]: time="2026-04-16T04:25:11.647985999Z" level=warning msg="unknown status" status=0
Apr 16 04:25:13.464473 containerd[1575]: time="2026-04-16T04:25:13.279557483Z" level=error msg="failed to drain init process 2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4 io" error="context deadline exceeded" runtime=io.containerd.runc.v2
Apr 16 04:25:13.796337 containerd[1575]: time="2026-04-16T04:25:13.601488360Z" level=error msg="Failed to handle backOff event container_id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" pid:8308 exit_status:1 exited_at:{seconds:1776313476 nanos:584041417} for 2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4" error="failed to handle container TaskExit event: failed to stop container: failed to delete task: context deadline exceeded"
Apr 16 04:25:14.789409 containerd[1575]: time="2026-04-16T04:25:14.720908297Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 16 04:25:15.128867 containerd[1575]: time="2026-04-16T04:25:14.725023489Z" level=error msg="ttrpc: received message on inactive stream" stream=67
Apr 16 04:25:15.883477 kubelet[2980]: E0416 04:25:15.878723 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.839s"
Apr 16 04:25:16.754148 kubelet[2980]: E0416 04:25:16.752179 2980 kubelet_node_status.go:474] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:25:02Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:25:03Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:25:03Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-04-16T04:25:03Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://10.0.0.115:6443/api/v1/nodes/localhost/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 16 04:25:18.202359 kubelet[2980]: E0416 04:25:17.007425 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:25:19.673912 containerd[1575]: time="2026-04-16T04:25:18.187748111Z" level=info msg="TaskExit event container_id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" id:\"2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4\" pid:8308 exit_status:1 exited_at:{seconds:1776313476 nanos:584041417}"
Apr 16 04:25:22.153673 kubelet[2980]: E0416 04:25:22.153254 2980 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Apr 16 04:25:23.555675 kubelet[2980]: E0416 04:25:23.542989 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.497s"
Apr 16 04:25:24.613080 kubelet[2980]: E0416 04:25:24.607837 2980 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 16 04:25:24.955286 kubelet[2980]: E0416 04:25:24.954781 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.362s"
Apr 16 04:25:25.255821 sshd[8678]: Connection closed by 10.0.0.1 port 55784
Apr 16 04:25:25.301778 sshd-session[8658]: pam_unix(sshd:session): session closed for user core
Apr 16 04:25:25.940736 kubelet[2980]: I0416 04:25:25.938719 2980 scope.go:122] "RemoveContainer" containerID="37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6"
Apr 16 04:25:26.205570 kubelet[2980]: E0416 04:25:26.129832 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:25:26.140787 systemd[1]: sshd@113-10.0.0.115:22-10.0.0.1:55784.service: Deactivated successfully.
Apr 16 04:25:26.142340 systemd[1]: sshd@113-10.0.0.115:22-10.0.0.1:55784.service: Consumed 5.043s CPU time, 3.2M memory peak.
Apr 16 04:25:26.251895 systemd[1]: session-114.scope: Deactivated successfully.
Apr 16 04:25:26.256021 systemd[1]: session-114.scope: Consumed 8.250s CPU time, 16.4M memory peak.
Apr 16 04:25:26.358983 systemd-logind[1549]: Session 114 logged out. Waiting for processes to exit.
Apr 16 04:25:26.361653 systemd-logind[1549]: Removed session 114.
Apr 16 04:25:26.370872 kubelet[2980]: E0416 04:25:26.370834 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-controller-manager pod=kube-controller-manager-localhost_kube-system(14bc29ec35edba17af38052ec24275f2)\"" pod="kube-system/kube-controller-manager-localhost" podUID="14bc29ec35edba17af38052ec24275f2"
Apr 16 04:25:27.665781 kubelet[2980]: I0416 04:25:27.641308 2980 scope.go:122] "RemoveContainer" containerID="ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778"
Apr 16 04:25:27.699979 kubelet[2980]: I0416 04:25:27.668878 2980 scope.go:122] "RemoveContainer" containerID="2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4"
Apr 16 04:25:27.699979 kubelet[2980]: E0416 04:25:27.699169 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:25:27.711402 kubelet[2980]: E0416 04:25:27.700386 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 16 04:25:27.847450 containerd[1575]: time="2026-04-16T04:25:27.846926109Z" level=info msg="RemoveContainer for \"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\""
Apr 16 04:25:27.913060 containerd[1575]: time="2026-04-16T04:25:27.912892967Z" level=info msg="RemoveContainer for \"ea76845fc624f2e09b1f24508ebde0755d82985748c5c62b1d74880d8c26a778\" returns successfully"
Apr 16 04:25:30.058644 kubelet[2980]: E0416 04:25:30.041285 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:25:30.253751 kubelet[2980]: I0416 04:25:30.065594 2980 scope.go:122] "RemoveContainer" containerID="2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4"
Apr 16 04:25:30.253751 kubelet[2980]: E0416 04:25:30.251937 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:25:30.327068 kubelet[2980]: E0416 04:25:30.325708 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 16 04:25:30.410033 systemd[1]: Started sshd@114-10.0.0.115:22-10.0.0.1:33644.service - OpenSSH per-connection server daemon (10.0.0.1:33644).
Apr 16 04:25:31.909701 sshd[8745]: Accepted publickey for core from 10.0.0.1 port 33644 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:25:31.939697 sshd-session[8745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:25:32.138321 systemd-logind[1549]: New session 115 of user core.
Apr 16 04:25:32.297811 systemd[1]: Started session-115.scope - Session 115 of User core.
Apr 16 04:25:37.680586 sshd[8777]: Connection closed by 10.0.0.1 port 33644
Apr 16 04:25:37.714573 sshd-session[8745]: pam_unix(sshd:session): session closed for user core
Apr 16 04:25:38.141132 systemd[1]: sshd@114-10.0.0.115:22-10.0.0.1:33644.service: Deactivated successfully.
Apr 16 04:25:38.263927 systemd[1]: session-115.scope: Deactivated successfully.
Apr 16 04:25:38.286763 systemd[1]: session-115.scope: Consumed 2.003s CPU time, 17.9M memory peak.
Apr 16 04:25:38.564615 systemd-logind[1549]: Session 115 logged out. Waiting for processes to exit.
Apr 16 04:25:38.727532 systemd-logind[1549]: Removed session 115.
Apr 16 04:25:41.591168 kubelet[2980]: E0416 04:25:41.590928 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.005s"
Apr 16 04:25:44.075177 systemd[1]: Started sshd@115-10.0.0.115:22-10.0.0.1:49712.service - OpenSSH per-connection server daemon (10.0.0.1:49712).
Apr 16 04:25:44.678737 containerd[1575]: time="2026-04-16T04:25:44.677999531Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 16 04:25:44.681469 kubelet[2980]: E0416 04:25:44.681163 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:25:51.322056 sshd[8821]: Accepted publickey for core from 10.0.0.1 port 49712 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:25:51.531760 kubelet[2980]: E0416 04:25:51.352737 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.505s"
Apr 16 04:25:51.775273 sshd-session[8821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:25:55.339673 systemd-logind[1549]: New session 116 of user core.
Apr 16 04:25:55.784633 systemd[1]: Started session-116.scope - Session 116 of User core.
Apr 16 04:25:58.880456 kubelet[2980]: E0416 04:25:58.873445 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.517s"
Apr 16 04:25:59.298779 kubelet[2980]: E0416 04:25:59.284616 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:26:03.971429 kubelet[2980]: E0416 04:26:03.896856 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.819s"
Apr 16 04:26:06.799720 sshd[8841]: Connection closed by 10.0.0.1 port 49712
Apr 16 04:26:06.894649 sshd-session[8821]: pam_unix(sshd:session): session closed for user core
Apr 16 04:26:07.604882 systemd[1]: sshd@115-10.0.0.115:22-10.0.0.1:49712.service: Deactivated successfully.
Apr 16 04:26:07.699143 systemd[1]: sshd@115-10.0.0.115:22-10.0.0.1:49712.service: Consumed 1.630s CPU time, 3.2M memory peak.
Apr 16 04:26:08.093616 systemd[1]: session-116.scope: Deactivated successfully.
Apr 16 04:26:08.171625 systemd[1]: session-116.scope: Consumed 4.575s CPU time, 15.9M memory peak.
Apr 16 04:26:08.566699 systemd-logind[1549]: Session 116 logged out. Waiting for processes to exit.
Apr 16 04:26:09.250923 systemd-logind[1549]: Removed session 116.
Apr 16 04:26:09.732080 kubelet[2980]: E0416 04:26:09.713633 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.128s"
Apr 16 04:26:11.761742 kubelet[2980]: E0416 04:26:11.754562 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.726s"
Apr 16 04:26:13.039144 kubelet[2980]: E0416 04:26:13.037504 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.275s"
Apr 16 04:26:13.169696 systemd[1]: Started sshd@116-10.0.0.115:22-10.0.0.1:45786.service - OpenSSH per-connection server daemon (10.0.0.1:45786).
Apr 16 04:26:14.655049 kubelet[2980]: E0416 04:26:14.645785 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.607s"
Apr 16 04:26:15.801732 containerd[1575]: time="2026-04-16T04:26:15.801036206Z" level=info msg="StopContainer for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" with timeout 2 (s)"
Apr 16 04:26:16.358642 containerd[1575]: time="2026-04-16T04:26:16.309382633Z" level=info msg="Stop container \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" with signal terminated"
Apr 16 04:26:19.898268 kubelet[2980]: E0416 04:26:19.850776 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.203s"
Apr 16 04:26:23.656542 kubelet[2980]: E0416 04:26:23.649312 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.79s"
Apr 16 04:26:24.146716 sshd[8865]: Accepted publickey for core from 10.0.0.1 port 45786 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:26:24.713729 sshd-session[8865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:26:25.565021 kubelet[2980]: E0416 04:26:25.551952 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.857s"
Apr 16 04:26:26.338585 systemd-logind[1549]: New session 117 of user core.
Apr 16 04:26:26.526229 systemd[1]: Started session-117.scope - Session 117 of User core.
Apr 16 04:26:26.721431 kubelet[2980]: I0416 04:26:26.543805 2980 scope.go:122] "RemoveContainer" containerID="a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9"
Apr 16 04:26:27.087059 kubelet[2980]: E0416 04:26:27.075396 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.314s"
Apr 16 04:26:28.009430 kubelet[2980]: E0416 04:26:28.004589 2980 kuberuntime_manager.go:1664] "Unhandled Error" err="container tigera-operator start failed in pod tigera-operator-6cf4cccc57-mwc4j_tigera-operator(1fd5a14c-9f90-43e3-abf1-9685462b990b): CreateContainerConfigError: failed to sync configmap cache: timed out waiting for the condition" logger="UnhandledError"
Apr 16 04:26:28.308326 kubelet[2980]: E0416 04:26:28.010019 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CreateContainerConfigError: \"failed to sync configmap cache: timed out waiting for the condition\"" pod="tigera-operator/tigera-operator-6cf4cccc57-mwc4j" podUID="1fd5a14c-9f90-43e3-abf1-9685462b990b"
Apr 16 04:26:30.597996 kubelet[2980]: E0416 04:26:30.464446 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.881s"
Apr 16 04:26:31.204316 kubelet[2980]: I0416 04:26:31.201898 2980 scope.go:122] "RemoveContainer" containerID="37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6"
Apr 16 04:26:32.291922 kubelet[2980]: E0416 04:26:32.286257 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:26:33.482795 kubelet[2980]: E0416 04:26:33.473193 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.811s"
Apr 16 04:26:33.884819 containerd[1575]: time="2026-04-16T04:26:33.753438545Z" level=info msg="Kill container \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\""
Apr 16 04:26:34.272672 kubelet[2980]: I0416 04:26:33.921899 2980 scope.go:122] "RemoveContainer" containerID="2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4"
Apr 16 04:26:34.272672 kubelet[2980]: E0416 04:26:33.927461 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:26:34.272672 kubelet[2980]: E0416 04:26:33.972180 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 16 04:26:35.244456 sshd[8900]: Connection closed by 10.0.0.1 port 45786
Apr 16 04:26:35.612922 sshd-session[8865]: pam_unix(sshd:session): session closed for user core
Apr 16 04:26:35.935487 systemd[1]: cri-containerd-839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd.scope: Deactivated successfully.
Apr 16 04:26:36.090017 systemd[1]: cri-containerd-839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd.scope: Consumed 36.151s CPU time, 299.3M memory peak, 10.7M read from disk, 1012K written to disk.
Apr 16 04:26:36.614599 systemd[1]: sshd@116-10.0.0.115:22-10.0.0.1:45786.service: Deactivated successfully.
Apr 16 04:26:36.618134 systemd[1]: sshd@116-10.0.0.115:22-10.0.0.1:45786.service: Consumed 2.832s CPU time, 3.3M memory peak.
Apr 16 04:26:36.976663 systemd[1]: session-117.scope: Deactivated successfully.
Apr 16 04:26:37.029952 systemd[1]: session-117.scope: Consumed 2.906s CPU time, 16.4M memory peak.
Apr 16 04:26:37.309993 systemd-logind[1549]: Session 117 logged out. Waiting for processes to exit.
Apr 16 04:26:38.056667 systemd-logind[1549]: Removed session 117.
Apr 16 04:26:38.958919 containerd[1575]: time="2026-04-16T04:26:38.945967795Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:8,}"
Apr 16 04:26:39.588517 kubelet[2980]: E0416 04:26:39.576343 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.883s"
Apr 16 04:26:41.300900 containerd[1575]: time="2026-04-16T04:26:41.299669061Z" level=warning msg="container event discarded" container=37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6 type=CONTAINER_STOPPED_EVENT
Apr 16 04:26:42.058803 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1998077077.mount: Deactivated successfully.
Apr 16 04:26:42.434432 containerd[1575]: time="2026-04-16T04:26:42.078837161Z" level=info msg="Container d91b55c4f4b6be64d8c03e922df43701c422a8cf684ef72e31b5aebf1fd170a7: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:26:42.912986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1231110515.mount: Deactivated successfully.
Apr 16 04:26:43.123879 kubelet[2980]: E0416 04:26:43.011849 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.231s"
Apr 16 04:26:43.489347 systemd[1]: Started sshd@117-10.0.0.115:22-10.0.0.1:52460.service - OpenSSH per-connection server daemon (10.0.0.1:52460).
Apr 16 04:26:43.878930 kubelet[2980]: I0416 04:26:43.833680 2980 scope.go:122] "RemoveContainer" containerID="a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9"
Apr 16 04:26:44.537259 containerd[1575]: time="2026-04-16T04:26:44.536218732Z" level=info msg="CreateContainer within sandbox \"02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:8,} returns container id \"d91b55c4f4b6be64d8c03e922df43701c422a8cf684ef72e31b5aebf1fd170a7\""
Apr 16 04:26:44.666365 containerd[1575]: time="2026-04-16T04:26:44.662717393Z" level=info msg="StartContainer for \"d91b55c4f4b6be64d8c03e922df43701c422a8cf684ef72e31b5aebf1fd170a7\""
Apr 16 04:26:45.551110 containerd[1575]: time="2026-04-16T04:26:45.545586604Z" level=info msg="connecting to shim d91b55c4f4b6be64d8c03e922df43701c422a8cf684ef72e31b5aebf1fd170a7" address="unix:///run/containerd/s/b0f2c5cfffdebf676e7ed85c3328df6a87775c2b04620a5f0b47a494ee449f34" protocol=ttrpc version=3
Apr 16 04:26:46.382621 kubelet[2980]: E0416 04:26:46.377737 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.003s"
Apr 16 04:26:47.042694 containerd[1575]: time="2026-04-16T04:26:47.030226164Z" level=info msg="received container exit event container_id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" pid:8341 exited_at:{seconds:1776313606 nanos:430983384}"
Apr 16 04:26:47.801676 containerd[1575]: time="2026-04-16T04:26:47.796873070Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for container &ContainerMetadata{Name:tigera-operator,Attempt:8,}"
Apr 16 04:26:49.934744 containerd[1575]: time="2026-04-16T04:26:49.934284813Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"27bdd66ed03ad1690916962341370c247f4a37a4d6005e18a93a84f9c9122e36\": OCI runtime exec failed: exec failed: unable to start container process: reading from parent failed: fetch packet length from socket: recvfrom: connection reset by peer"
Apr 16 04:26:50.003024 kubelet[2980]: E0416 04:26:50.002651 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"27bdd66ed03ad1690916962341370c247f4a37a4d6005e18a93a84f9c9122e36\": OCI runtime exec failed: exec failed: unable to start container process: reading from parent failed: fetch packet length from socket: recvfrom: connection reset by peer" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-shutdown"]
Apr 16 04:26:50.232673 kubelet[2980]: E0416 04:26:50.091812 2980 kuberuntime_container.go:772] "PreStop hook failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"27bdd66ed03ad1690916962341370c247f4a37a4d6005e18a93a84f9c9122e36\": OCI runtime exec failed: exec failed: unable to start container process: reading from parent failed: fetch packet length from socket: recvfrom: connection reset by peer" pod="calico-system/calico-node-kgtx5" podUID="89b5fbad-4c87-4aac-9951-121c09bbd556" containerName="calico-node" containerID="containerd://839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd"
Apr 16 04:26:50.658645 kubelet[2980]: E0416 04:26:50.636902 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="4.253s"
Apr 16 04:26:52.201786 containerd[1575]: time="2026-04-16T04:26:52.186736448Z" level=error msg="get state for ef5f4006e120e46314e0bc47df44076bfa2bb769b682df85956da4cb487360f0" error="context deadline exceeded"
Apr 16 04:26:52.757409 containerd[1575]: time="2026-04-16T04:26:52.421313679Z" level=warning msg="unknown status" status=0
Apr 16 04:26:52.757409 containerd[1575]: time="2026-04-16T04:26:52.599377761Z" level=info msg="Container 9e1dc8db36719bdd5e0b65773f82be65f6b35c8336b569dd54078d3169a5e974: CDI devices from CRI Config.CDIDevices: []"
Apr 16 04:26:54.257313 kubelet[2980]: E0416 04:26:54.250070 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.669s"
Apr 16 04:26:55.806024 containerd[1575]: time="2026-04-16T04:26:55.788894955Z" level=info msg="CreateContainer within sandbox \"c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db\" for &ContainerMetadata{Name:tigera-operator,Attempt:8,} returns container id \"9e1dc8db36719bdd5e0b65773f82be65f6b35c8336b569dd54078d3169a5e974\""
Apr 16 04:26:56.912437 containerd[1575]: time="2026-04-16T04:26:56.905954722Z" level=info msg="StartContainer for \"9e1dc8db36719bdd5e0b65773f82be65f6b35c8336b569dd54078d3169a5e974\""
Apr 16 04:26:57.143439 containerd[1575]: time="2026-04-16T04:26:57.099460574Z" level=error msg="failed to handle container TaskExit event container_id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" pid:8341 exited_at:{seconds:1776313606 nanos:430983384}" error="failed to stop container: context deadline exceeded"
Apr 16 04:26:57.349236 kubelet[2980]: E0416 04:26:57.340499 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.739s"
Apr 16 04:26:57.465669 containerd[1575]: time="2026-04-16T04:26:57.463488470Z" level=info msg="connecting to shim 9e1dc8db36719bdd5e0b65773f82be65f6b35c8336b569dd54078d3169a5e974" address="unix:///run/containerd/s/b40817c4c3b3e5498badbc035a393ccaaa43aaaa06e8111d2e4d4485037a2b06" protocol=ttrpc version=3
Apr 16 04:26:57.671397 containerd[1575]: time="2026-04-16T04:26:57.654697075Z" level=error msg="ttrpc: received message on inactive stream" stream=223
Apr 16 04:26:57.671397 containerd[1575]: time="2026-04-16T04:26:57.657584098Z" level=error msg="ttrpc: received message on inactive stream" stream=231
Apr 16 04:26:57.776148 containerd[1575]: time="2026-04-16T04:26:57.666577745Z" level=error msg="ttrpc: received message on inactive stream" stream=233
Apr 16 04:26:58.267241 containerd[1575]: time="2026-04-16T04:26:58.266544013Z" level=info msg="TaskExit event container_id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" pid:8341 exited_at:{seconds:1776313606 nanos:430983384}"
Apr 16 04:26:58.706553 containerd[1575]: time="2026-04-16T04:26:58.292143870Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded"
Apr 16 04:26:59.249447 containerd[1575]: time="2026-04-16T04:26:59.204784874Z" level=warning msg="container event discarded" container=1d287cc31ae90fc92a6ac1a96586c7640f7ad08584dc2919f5c43556809f0090 type=CONTAINER_DELETED_EVENT
Apr 16 04:26:59.268466 sshd[8923]: Accepted publickey for core from 10.0.0.1 port 52460 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:26:59.466889 kubelet[2980]: E0416 04:26:59.306889 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:26:59.466889 kubelet[2980]: E0416 04:26:59.466686 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.125s"
Apr 16 04:26:59.468505 sshd-session[8923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:27:01.089901 kubelet[2980]: E0416 04:27:01.081355 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:27:01.271125 systemd-logind[1549]: New session 118 of user core.
Apr 16 04:27:01.450049 systemd[1]: Started session-118.scope - Session 118 of User core.
Apr 16 04:27:01.905793 containerd[1575]: time="2026-04-16T04:27:01.802940015Z" level=error msg="get state for 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" error="context deadline exceeded"
Apr 16 04:27:01.905793 containerd[1575]: time="2026-04-16T04:27:01.896736677Z" level=warning msg="unknown status" status=0
Apr 16 04:27:05.286080 kubelet[2980]: E0416 04:27:05.264737 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.782s"
Apr 16 04:27:05.894773 kubelet[2980]: I0416 04:27:05.889462 2980 scope.go:122] "RemoveContainer" containerID="37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6"
Apr 16 04:27:09.616716 containerd[1575]: time="2026-04-16T04:27:09.611480033Z" level=error msg="Failed to handle backOff event container_id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" pid:8341 exited_at:{seconds:1776313606 nanos:430983384} for 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 16 04:27:10.448550 containerd[1575]: time="2026-04-16T04:27:10.442399043Z" level=error msg="ttrpc: received message on inactive stream" stream=243
Apr 16 04:27:11.077563 containerd[1575]: time="2026-04-16T04:27:11.063655249Z" level=error msg="ttrpc: received message on inactive stream" stream=245
Apr 16 04:27:11.161077 containerd[1575]: time="2026-04-16T04:27:11.085158388Z" level=error msg="ttrpc: received message on inactive stream" stream=249
Apr 16 04:27:11.251002 containerd[1575]: time="2026-04-16T04:27:11.182593634Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"50e20a021d34b3cb2d0e9c9c6f5ac503553033348210e550b5f470719d0a7316\": cannot exec in a stopped state"
Apr 16 04:27:11.414241 kubelet[2980]: E0416 04:27:11.306974 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"50e20a021d34b3cb2d0e9c9c6f5ac503553033348210e550b5f470719d0a7316\": cannot exec in a stopped state" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:27:11.762753 kubelet[2980]: E0416 04:27:11.697630 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="6.306s"
Apr 16 04:27:12.000536 containerd[1575]: time="2026-04-16T04:27:11.998258624Z" level=info msg="RemoveContainer for \"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\""
Apr 16 04:27:12.176764 containerd[1575]: time="2026-04-16T04:27:12.166841159Z" level=info msg="TaskExit event container_id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" pid:8341 exited_at:{seconds:1776313606 nanos:430983384}"
Apr 16 04:27:13.229615 containerd[1575]: time="2026-04-16T04:27:13.225801923Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"dd42de695fb1b8eac226b3f3ec340be731e220fea6e0646221b0ff54d57e2f19\": cannot exec in a stopped state"
Apr 16 04:27:13.978770 kubelet[2980]: E0416 04:27:13.763069 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"dd42de695fb1b8eac226b3f3ec340be731e220fea6e0646221b0ff54d57e2f19\": cannot exec in a stopped state" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:27:14.331949 containerd[1575]: time="2026-04-16T04:27:14.291451181Z" level=error msg="get state for 02c6cf418535f95cd5e505eea8bd25713df21afd62459a4fabb97d748d5a7c81" error="context deadline exceeded"
Apr 16 04:27:14.490854 containerd[1575]: time="2026-04-16T04:27:14.366207161Z" level=warning msg="unknown status" status=0
Apr 16 04:27:14.780787 sshd[8954]: Connection closed by 10.0.0.1 port 52460
Apr 16 04:27:14.850637 sshd-session[8923]: pam_unix(sshd:session): session closed for user core
Apr 16 04:27:15.583304 systemd[1]: sshd@117-10.0.0.115:22-10.0.0.1:52460.service: Deactivated successfully.
Apr 16 04:27:15.701115 systemd[1]: sshd@117-10.0.0.115:22-10.0.0.1:52460.service: Consumed 4.349s CPU time, 3.5M memory peak.
Apr 16 04:27:15.971538 systemd[1]: session-118.scope: Deactivated successfully.
Apr 16 04:27:16.105961 systemd[1]: session-118.scope: Consumed 6.478s CPU time, 16M memory peak.
Apr 16 04:27:16.358595 systemd-logind[1549]: Session 118 logged out. Waiting for processes to exit.
Apr 16 04:27:16.866531 systemd-logind[1549]: Removed session 118.
Apr 16 04:27:18.740315 containerd[1575]: time="2026-04-16T04:27:18.685956097Z" level=info msg="RemoveContainer for \"37e11929e4914e606146a8cf2b058fe0151d8377f69d098d2a88fb7a60684cd6\" returns successfully"
Apr 16 04:27:19.402279 kubelet[2980]: E0416 04:27:18.742072 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="7.004s"
Apr 16 04:27:19.402279 kubelet[2980]: I0416 04:27:18.759647 2980 scope.go:122] "RemoveContainer" containerID="a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9"
Apr 16 04:27:18.941309 systemd[1]: Started cri-containerd-d91b55c4f4b6be64d8c03e922df43701c422a8cf684ef72e31b5aebf1fd170a7.scope - libcontainer container d91b55c4f4b6be64d8c03e922df43701c422a8cf684ef72e31b5aebf1fd170a7.
Apr 16 04:27:19.880399 kubelet[2980]: E0416 04:27:19.681161 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:27:20.046827 containerd[1575]: time="2026-04-16T04:27:19.533307675Z" level=error msg="get state for 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" error="context deadline exceeded"
Apr 16 04:27:20.046827 containerd[1575]: time="2026-04-16T04:27:19.706704735Z" level=warning msg="unknown status" status=0
Apr 16 04:27:21.873734 systemd[1]: Started sshd@118-10.0.0.115:22-10.0.0.1:51088.service - OpenSSH per-connection server daemon (10.0.0.1:51088).
Apr 16 04:27:22.732529 containerd[1575]: time="2026-04-16T04:27:22.699001074Z" level=error msg="Failed to handle backOff event container_id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" pid:8341 exited_at:{seconds:1776313606 nanos:430983384} for 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 16 04:27:24.944456 containerd[1575]: time="2026-04-16T04:27:24.937436511Z" level=error msg="ttrpc: received message on inactive stream" stream=263
Apr 16 04:27:24.944456 containerd[1575]: time="2026-04-16T04:27:24.938563104Z" level=error msg="ttrpc: received message on inactive stream" stream=267
Apr 16 04:27:24.944456 containerd[1575]: time="2026-04-16T04:27:24.938588227Z" level=error msg="ttrpc: received message on inactive stream" stream=269
Apr 16 04:27:25.845561 containerd[1575]: time="2026-04-16T04:27:25.711603364Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"749441932ba9b3c95a3c48607265a8cfa69975f791490283176d10aabbf671dd\": cannot exec in a stopped state"
Apr 16 04:27:26.225730 kubelet[2980]: E0416 04:27:26.213953 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"749441932ba9b3c95a3c48607265a8cfa69975f791490283176d10aabbf671dd\": cannot exec in a stopped state" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:27:26.881894 containerd[1575]: time="2026-04-16T04:27:26.858434707Z" level=info msg="RemoveContainer for \"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\""
Apr 16 04:27:27.240482 containerd[1575]: time="2026-04-16T04:27:27.191017882Z" level=info msg="TaskExit event container_id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" pid:8341 exited_at:{seconds:1776313606 nanos:430983384}"
Apr 16 04:27:28.862558 containerd[1575]: time="2026-04-16T04:27:28.862257878Z" level=error msg="get state for c4eca71c28ab0a87d106090e5ac6bae255970f633a98c0f60f557bad7f5868db" error="context deadline exceeded"
Apr 16 04:27:29.396589 containerd[1575]: time="2026-04-16T04:27:28.868330384Z" level=warning msg="unknown status" status=0
Apr 16 04:27:30.007915 kubelet[2980]: E0416 04:27:29.992181 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.246s"
Apr 16 04:27:31.361584 containerd[1575]: time="2026-04-16T04:27:31.348841388Z" level=info msg="RemoveContainer for \"a0acbc3a4b294f85789dc96c2b791f2c92905ea7675b1fab59d929d7462ab5d9\" returns successfully"
Apr 16 04:27:32.881253 containerd[1575]: time="2026-04-16T04:27:32.864945762Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"ea4a2cc0966b96c287f982361f6020d9f46e3431e0a359135eee49b8dd1e59e2\": cannot exec in a stopped state"
Apr 16 04:27:33.667897 kubelet[2980]: E0416 04:27:33.662034 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.664s"
Apr 16 04:27:34.040041 kubelet[2980]: E0416 04:27:34.034831 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"ea4a2cc0966b96c287f982361f6020d9f46e3431e0a359135eee49b8dd1e59e2\": cannot exec in a stopped state" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:27:36.639893 containerd[1575]: time="2026-04-16T04:27:36.627016343Z" level=error msg="get state for 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" error="context deadline exceeded"
Apr 16 04:27:37.053305 containerd[1575]: time="2026-04-16T04:27:36.808428051Z" level=warning msg="unknown status" status=0
Apr 16 04:27:38.698405 containerd[1575]: time="2026-04-16T04:27:38.671246896Z" level=error msg="Failed to handle backOff event container_id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" pid:8341 exited_at:{seconds:1776313606 nanos:430983384} for 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 16 04:27:40.101741 containerd[1575]: time="2026-04-16T04:27:39.831820267Z" level=error msg="ttrpc: received message on inactive stream" stream=283
Apr 16 04:27:40.455778 containerd[1575]: time="2026-04-16T04:27:40.270471522Z" level=error msg="ttrpc: received message on inactive stream" stream=285
Apr 16 04:27:40.496823 containerd[1575]: time="2026-04-16T04:27:40.477726754Z" level=error msg="ttrpc: received message on inactive stream" stream=287
Apr 16 04:27:41.077872 containerd[1575]: time="2026-04-16T04:27:41.067185688Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"2d3f4f5dbd059b251f8169d0957a0d22516c600ae78d0dc26f7b555ef900cbd5\": cannot exec in a stopped state"
Apr 16 04:27:41.252360 kubelet[2980]: E0416 04:27:41.169217 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"2d3f4f5dbd059b251f8169d0957a0d22516c600ae78d0dc26f7b555ef900cbd5\": cannot exec in a stopped state" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:27:41.656153 sshd[8988]: Accepted publickey for core from 10.0.0.1 port 51088 ssh2: RSA SHA256:KOGEJiHfbLr/luAneL7Ny+uxCdk+SGypBXA3NqRQDAk
Apr 16 04:27:43.076041 sshd-session[8988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 16 04:27:43.819179 containerd[1575]: time="2026-04-16T04:27:43.793434302Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"a9c536b849f518b414c2a6190cb2e07631d31f47829a1f92f1a8a285d51be9a4\": cannot exec in a stopped state"
Apr 16 04:27:44.739441 kubelet[2980]: E0416 04:27:44.681529 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"a9c536b849f518b414c2a6190cb2e07631d31f47829a1f92f1a8a285d51be9a4\": cannot exec in a stopped state" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:27:45.427133 systemd-logind[1549]: New session 119 of user core.
Apr 16 04:27:45.580413 systemd[1]: Started session-119.scope - Session 119 of User core.
Apr 16 04:27:46.229803 kubelet[2980]: E0416 04:27:45.439955 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="11.604s"
Apr 16 04:27:47.195361 containerd[1575]: time="2026-04-16T04:27:47.194668319Z" level=info msg="TaskExit event container_id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" pid:8341 exited_at:{seconds:1776313606 nanos:430983384}"
Apr 16 04:27:48.700242 kubelet[2980]: E0416 04:27:48.690624 2980 controller.go:251] "Failed to update lease" err="Put \"https://10.0.0.115:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded"
Apr 16 04:27:52.632281 containerd[1575]: time="2026-04-16T04:27:52.629185670Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"7296d63df40bdad46a154e2c9876b29788512630be40658441048fb8b7a4eec5\": cannot exec in a stopped state"
Apr 16 04:27:52.703933 kubelet[2980]: E0416 04:27:52.686240 2980 controller.go:251] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"localhost\": the object has been modified; please apply your changes to the latest version and try again"
Apr 16 04:27:52.944059 kubelet[2980]: E0416 04:27:52.776490 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"7296d63df40bdad46a154e2c9876b29788512630be40658441048fb8b7a4eec5\": cannot exec in a stopped state" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:27:53.403583 systemd[1]: Started cri-containerd-9e1dc8db36719bdd5e0b65773f82be65f6b35c8336b569dd54078d3169a5e974.scope - libcontainer container 9e1dc8db36719bdd5e0b65773f82be65f6b35c8336b569dd54078d3169a5e974.
Apr 16 04:27:55.302810 kubelet[2980]: E0416 04:27:55.287840 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="9.396s"
Apr 16 04:27:57.077261 containerd[1575]: time="2026-04-16T04:27:57.044528826Z" level=error msg="get state for 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" error="context deadline exceeded"
Apr 16 04:27:57.077261 containerd[1575]: time="2026-04-16T04:27:57.048345423Z" level=warning msg="unknown status" status=0
Apr 16 04:27:57.458684 containerd[1575]: time="2026-04-16T04:27:57.455886846Z" level=error msg="Failed to handle backOff event container_id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" id:\"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" pid:8341 exited_at:{seconds:1776313606 nanos:430983384} for 839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" error="failed to handle container TaskExit event: failed to stop container: context deadline exceeded"
Apr 16 04:27:58.058463 kubelet[2980]: E0416 04:27:58.055641 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.687s"
Apr 16 04:27:58.295248 containerd[1575]: time="2026-04-16T04:27:58.105822693Z" level=error msg="ttrpc: received message on inactive stream" stream=307
Apr 16 04:27:58.295248 containerd[1575]: time="2026-04-16T04:27:58.273556127Z" level=error msg="ttrpc: received message on inactive stream" stream=309
Apr 16 04:27:58.295248 containerd[1575]: time="2026-04-16T04:27:58.290390180Z" level=error msg="ttrpc: received message on inactive stream" stream=303
Apr 16 04:27:58.538714 containerd[1575]: time="2026-04-16T04:27:58.523698543Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"94dde91d2101c6e80913eacc83cd806a5611e7290f2c11110990398d79ae8351\": cannot exec in a stopped state"
Apr 16 04:27:58.853728 kubelet[2980]: E0416 04:27:58.799548 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"94dde91d2101c6e80913eacc83cd806a5611e7290f2c11110990398d79ae8351\": cannot exec in a stopped state" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:27:59.282699 containerd[1575]: time="2026-04-16T04:27:59.281665645Z" level=error msg="get state for 9e1dc8db36719bdd5e0b65773f82be65f6b35c8336b569dd54078d3169a5e974" error="context deadline exceeded"
Apr 16 04:27:59.836608 containerd[1575]: time="2026-04-16T04:27:59.292248413Z" level=warning msg="unknown status" status=0
Apr 16 04:28:00.669845 containerd[1575]: time="2026-04-16T04:28:00.660503327Z" level=error msg="ExecSync for \"839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"b511686127572ff1fac66358e9dd879abe5104c67f4eddadef155eb457b46380\": cannot exec in a stopped state"
Apr 16 04:28:01.064561 kubelet[2980]: E0416 04:28:00.981051 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.724s"
Apr 16 04:28:02.054194 kubelet[2980]: E0416 04:28:01.551776 2980 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"b511686127572ff1fac66358e9dd879abe5104c67f4eddadef155eb457b46380\": cannot exec in a stopped state" containerID="839e9646f13ccde2cfe315492558182ebfc3407c183920bb712f4a682127b4fd" cmd=["/bin/calico-node","-bird-ready","-felix-ready"]
Apr 16 04:28:02.808125 kubelet[2980]: E0416 04:28:02.801451 2980 kubelet.go:2691] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.721s"
Apr 16 04:28:02.856540 kubelet[2980]: I0416 04:28:02.853033 2980 scope.go:122] "RemoveContainer" containerID="2e964e8d7e38e920ee3a115fd39adfdd1f2b38ca470ac04ada91068424a93da4"
Apr 16 04:28:02.856540 kubelet[2980]: E0416 04:28:02.854959 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 16 04:28:02.857723 sshd[9011]: Connection closed by 10.0.0.1 port 51088
Apr 16 04:28:02.895925 containerd[1575]: time="2026-04-16T04:28:02.870701741Z" level=error msg="ttrpc: received message on inactive stream" stream=127
Apr 16 04:28:02.905986 sshd-session[8988]: pam_unix(sshd:session): session closed for user core
Apr 16 04:28:02.978272 kubelet[2980]: E0416 04:28:02.917834 2980 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-scheduler pod=kube-scheduler-localhost_kube-system(f7c88b30fc803a3ec6b6c138191bdaca)\"" pod="kube-system/kube-scheduler-localhost" podUID="f7c88b30fc803a3ec6b6c138191bdaca"
Apr 16 04:28:03.130461 systemd[1]: sshd@118-10.0.0.115:22-10.0.0.1:51088.service: Deactivated successfully.
Apr 16 04:28:03.131899 systemd[1]: sshd@118-10.0.0.115:22-10.0.0.1:51088.service: Consumed 4.958s CPU time, 3.5M memory peak.
Apr 16 04:28:03.180796 systemd[1]: session-119.scope: Deactivated successfully.
Apr 16 04:28:03.212938 systemd[1]: session-119.scope: Consumed 6.589s CPU time, 15.7M memory peak.
Apr 16 04:28:03.217921 systemd-logind[1549]: Session 119 logged out. Waiting for processes to exit.
Apr 16 04:28:03.225976 systemd-logind[1549]: Removed session 119.
Apr 16 04:28:03.293552 containerd[1575]: time="2026-04-16T04:28:03.289681168Z" level=error msg="get state for 9e1dc8db36719bdd5e0b65773f82be65f6b35c8336b569dd54078d3169a5e974" error="context deadline exceeded" Apr 16 04:28:03.293552 containerd[1575]: time="2026-04-16T04:28:03.295495139Z" level=warning msg="unknown status" status=0 Apr 16 04:28:03.462951 containerd[1575]: time="2026-04-16T04:28:03.462072611Z" level=info msg="StartContainer for \"d91b55c4f4b6be64d8c03e922df43701c422a8cf684ef72e31b5aebf1fd170a7\" returns successfully" Apr 16 04:28:04.686686 kubelet[2980]: E0416 04:28:04.590578 2980 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"