Apr 21 03:51:58.665697 kernel: Linux version 6.12.81-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 20 22:35:05 -00 2026
Apr 21 03:51:58.665801 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7
Apr 21 03:51:58.665818 kernel: BIOS-provided physical RAM map:
Apr 21 03:51:58.665823 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 21 03:51:58.665829 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 21 03:51:58.665834 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 21 03:51:58.665840 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 21 03:51:58.665845 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 21 03:51:58.665859 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 21 03:51:58.665866 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 21 03:51:58.665871 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 21 03:51:58.665882 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 21 03:51:58.665890 kernel: NX (Execute Disable) protection: active
Apr 21 03:51:58.665897 kernel: APIC: Static calls initialized
Apr 21 03:51:58.665905 kernel: SMBIOS 2.8 present.
Apr 21 03:51:58.665915 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 21 03:51:58.666499 kernel: DMI: Memory slots populated: 1/1
Apr 21 03:51:58.666571 kernel: Hypervisor detected: KVM
Apr 21 03:51:58.666578 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 21 03:51:58.666583 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 03:51:58.666588 kernel: kvm-clock: using sched offset of 10249666235 cycles
Apr 21 03:51:58.666598 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 03:51:58.666615 kernel: tsc: Detected 2793.438 MHz processor
Apr 21 03:51:58.666620 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 03:51:58.666676 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 03:51:58.666685 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 21 03:51:58.666733 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 21 03:51:58.666741 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 03:51:58.666748 kernel: Using GB pages for direct mapping
Apr 21 03:51:58.666755 kernel: ACPI: Early table checksum verification disabled
Apr 21 03:51:58.666762 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 21 03:51:58.666767 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 03:51:58.666773 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 03:51:58.666778 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 03:51:58.666785 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 21 03:51:58.666793 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 03:51:58.666813 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 03:51:58.666819 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 03:51:58.666824 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 21 03:51:58.666829 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 21 03:51:58.666837 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 21 03:51:58.666847 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 21 03:51:58.666854 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 21 03:51:58.666859 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 21 03:51:58.666864 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 21 03:51:58.666869 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 21 03:51:58.666874 kernel: No NUMA configuration found
Apr 21 03:51:58.666881 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 21 03:51:58.666888 kernel: NODE_DATA(0) allocated [mem 0x9cfd4dc0-0x9cfdbfff]
Apr 21 03:51:58.666895 kernel: Zone ranges:
Apr 21 03:51:58.666903 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 03:51:58.666912 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 21 03:51:58.666923 kernel: Normal empty
Apr 21 03:51:58.666931 kernel: Device empty
Apr 21 03:51:58.666938 kernel: Movable zone start for each node
Apr 21 03:51:58.666943 kernel: Early memory node ranges
Apr 21 03:51:58.666948 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 21 03:51:58.666954 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 21 03:51:58.666959 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 21 03:51:58.667024 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 03:51:58.667031 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 21 03:51:58.667182 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 21 03:51:58.667227 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 21 03:51:58.667256 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 03:51:58.667263 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 21 03:51:58.667268 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 21 03:51:58.667274 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 03:51:58.667296 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 03:51:58.667308 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 03:51:58.667314 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 03:51:58.667319 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 03:51:58.667324 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 03:51:58.667330 kernel: TSC deadline timer available
Apr 21 03:51:58.667335 kernel: CPU topo: Max. logical packages: 1
Apr 21 03:51:58.667340 kernel: CPU topo: Max. logical dies: 1
Apr 21 03:51:58.667345 kernel: CPU topo: Max. dies per package: 1
Apr 21 03:51:58.667351 kernel: CPU topo: Max. threads per core: 1
Apr 21 03:51:58.667359 kernel: CPU topo: Num. cores per package: 4
Apr 21 03:51:58.667364 kernel: CPU topo: Num. threads per package: 4
Apr 21 03:51:58.667369 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Apr 21 03:51:58.667375 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 03:51:58.667380 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 21 03:51:58.667385 kernel: kvm-guest: setup PV sched yield
Apr 21 03:51:58.667392 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 21 03:51:58.667397 kernel: Booting paravirtualized kernel on KVM
Apr 21 03:51:58.667403 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 03:51:58.667409 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 21 03:51:58.667416 kernel: percpu: Embedded 60 pages/cpu s207448 r8192 d30120 u524288
Apr 21 03:51:58.667422 kernel: pcpu-alloc: s207448 r8192 d30120 u524288 alloc=1*2097152
Apr 21 03:51:58.667427 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 21 03:51:58.667432 kernel: kvm-guest: PV spinlocks enabled
Apr 21 03:51:58.667437 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 03:51:58.667444 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7
Apr 21 03:51:58.667449 kernel: random: crng init done
Apr 21 03:51:58.667455 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 21 03:51:58.667462 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 03:51:58.667467 kernel: Fallback order for Node 0: 0
Apr 21 03:51:58.667472 kernel: Built 1 zonelists, mobility grouping on. Total pages: 642938
Apr 21 03:51:58.667477 kernel: Policy zone: DMA32
Apr 21 03:51:58.667483 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 03:51:58.667488 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 21 03:51:58.667493 kernel: ftrace: allocating 40126 entries in 157 pages
Apr 21 03:51:58.667499 kernel: ftrace: allocated 157 pages with 5 groups
Apr 21 03:51:58.667504 kernel: Dynamic Preempt: voluntary
Apr 21 03:51:58.667511 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 03:51:58.667517 kernel: rcu: RCU event tracing is enabled.
Apr 21 03:51:58.667522 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 21 03:51:58.667528 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 03:51:58.667533 kernel: Rude variant of Tasks RCU enabled.
Apr 21 03:51:58.667545 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 03:51:58.667550 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 03:51:58.667556 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 21 03:51:58.667563 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 03:51:58.667571 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 03:51:58.667576 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 21 03:51:58.667582 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 21 03:51:58.667587 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 03:51:58.667593 kernel: Console: colour VGA+ 80x25
Apr 21 03:51:58.667608 kernel: printk: legacy console [ttyS0] enabled
Apr 21 03:51:58.667619 kernel: ACPI: Core revision 20240827
Apr 21 03:51:58.667629 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 21 03:51:58.667637 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 03:51:58.667642 kernel: x2apic enabled
Apr 21 03:51:58.667648 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 03:51:58.667653 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 21 03:51:58.667669 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 21 03:51:58.667675 kernel: kvm-guest: setup PV IPIs
Apr 21 03:51:58.667681 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 21 03:51:58.667687 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 03:51:58.667714 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 21 03:51:58.667725 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 21 03:51:58.667734 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 21 03:51:58.667744 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 21 03:51:58.667751 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 03:51:58.667756 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 03:51:58.667762 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 03:51:58.667768 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 03:51:58.667774 kernel: RETBleed: Vulnerable
Apr 21 03:51:58.667782 kernel: Speculative Store Bypass: Vulnerable
Apr 21 03:51:58.667787 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 03:51:58.667793 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 03:51:58.667799 kernel: active return thunk: its_return_thunk
Apr 21 03:51:58.667804 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 03:51:58.667810 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 03:51:58.667816 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 03:51:58.667821 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 03:51:58.667827 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 03:51:58.667834 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 03:51:58.667840 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 03:51:58.667845 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 03:51:58.667851 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 21 03:51:58.667857 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 21 03:51:58.667862 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 21 03:51:58.667868 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 21 03:51:58.667874 kernel: Freeing SMP alternatives memory: 32K
Apr 21 03:51:58.667888 kernel: pid_max: default: 32768 minimum: 301
Apr 21 03:51:58.667896 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Apr 21 03:51:58.667901 kernel: landlock: Up and running.
Apr 21 03:51:58.667907 kernel: SELinux: Initializing.
Apr 21 03:51:58.667913 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 03:51:58.667919 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 21 03:51:58.667925 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 21 03:51:58.667930 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 21 03:51:58.667936 kernel: signal: max sigframe size: 3632
Apr 21 03:51:58.667942 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 03:51:58.667949 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 03:51:58.667955 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Apr 21 03:51:58.667960 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 03:51:58.667966 kernel: smp: Bringing up secondary CPUs ...
Apr 21 03:51:58.667972 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 03:51:58.667978 kernel: .... node #0, CPUs: #1 #2 #3
Apr 21 03:51:58.667983 kernel: smp: Brought up 1 node, 4 CPUs
Apr 21 03:51:58.667989 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 21 03:51:58.667995 kernel: Memory: 2419756K/2571752K available (14336K kernel code, 2453K rwdata, 26076K rodata, 46228K init, 2520K bss, 146108K reserved, 0K cma-reserved)
Apr 21 03:51:58.668002 kernel: devtmpfs: initialized
Apr 21 03:51:58.668008 kernel: x86/mm: Memory block size: 128MB
Apr 21 03:51:58.668014 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 03:51:58.668020 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 21 03:51:58.668025 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 03:51:58.668031 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 03:51:58.668049 kernel: audit: initializing netlink subsys (disabled)
Apr 21 03:51:58.668056 kernel: audit: type=2000 audit(1776743508.997:1): state=initialized audit_enabled=0 res=1
Apr 21 03:51:58.668062 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 03:51:58.668069 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 03:51:58.668075 kernel: cpuidle: using governor menu
Apr 21 03:51:58.668081 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 03:51:58.668087 kernel: dca service started, version 1.12.1
Apr 21 03:51:58.668092 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Apr 21 03:51:58.668098 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 21 03:51:58.668104 kernel: PCI: Using configuration type 1 for base access
Apr 21 03:51:58.668109 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 03:51:58.668115 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 03:51:58.668122 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 03:51:58.668128 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 03:51:58.668134 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 03:51:58.668139 kernel: ACPI: Added _OSI(Module Device)
Apr 21 03:51:58.668145 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 03:51:58.668326 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 03:51:58.668333 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 21 03:51:58.668338 kernel: ACPI: Interpreter enabled
Apr 21 03:51:58.668344 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 21 03:51:58.668352 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 03:51:58.668358 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 03:51:58.668363 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 03:51:58.668369 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 21 03:51:58.668375 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 03:51:58.668529 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 03:51:58.668591 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 21 03:51:58.668646 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 21 03:51:58.668653 kernel: PCI host bridge to bus 0000:00
Apr 21 03:51:58.668730 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 03:51:58.668781 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 03:51:58.668828 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 03:51:58.668874 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 21 03:51:58.668920 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 21 03:51:58.668968 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 21 03:51:58.669276 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 03:51:58.669659 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Apr 21 03:51:58.669731 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Apr 21 03:51:58.669824 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Apr 21 03:51:58.669881 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Apr 21 03:51:58.669937 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Apr 21 03:51:58.670034 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 03:51:58.670119 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Apr 21 03:51:58.670519 kernel: pci 0000:00:02.0: BAR 0 [io 0xc0c0-0xc0df]
Apr 21 03:51:58.670641 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Apr 21 03:51:58.670696 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 21 03:51:58.670820 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Apr 21 03:51:58.670916 kernel: pci 0000:00:03.0: BAR 0 [io 0xc000-0xc07f]
Apr 21 03:51:58.673370 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Apr 21 03:51:58.674527 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 21 03:51:58.675863 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Apr 21 03:51:58.676568 kernel: pci 0000:00:04.0: BAR 0 [io 0xc0e0-0xc0ff]
Apr 21 03:51:58.676659 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfebd3000-0xfebd3fff]
Apr 21 03:51:58.676715 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 21 03:51:58.676795 kernel: pci 0000:00:04.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Apr 21 03:51:58.676884 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Apr 21 03:51:58.676961 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 21 03:51:58.677256 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Apr 21 03:51:58.677352 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc100-0xc11f]
Apr 21 03:51:58.677406 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd4000-0xfebd4fff]
Apr 21 03:51:58.677467 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Apr 21 03:51:58.677525 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Apr 21 03:51:58.677533 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 03:51:58.677540 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 03:51:58.677546 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 03:51:58.677552 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 03:51:58.677558 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 21 03:51:58.677564 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 21 03:51:58.677570 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 21 03:51:58.677578 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 21 03:51:58.677584 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 21 03:51:58.677589 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 21 03:51:58.677595 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 21 03:51:58.677601 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 21 03:51:58.677607 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 21 03:51:58.677613 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 21 03:51:58.677619 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 21 03:51:58.677624 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 21 03:51:58.677632 kernel: iommu: Default domain type: Translated
Apr 21 03:51:58.677638 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 03:51:58.677644 kernel: PCI: Using ACPI for IRQ routing
Apr 21 03:51:58.677649 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 03:51:58.677655 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 21 03:51:58.677661 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 21 03:51:58.677715 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 21 03:51:58.677791 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 21 03:51:58.677873 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 03:51:58.677882 kernel: vgaarb: loaded
Apr 21 03:51:58.677888 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 21 03:51:58.677894 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 21 03:51:58.677899 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 03:51:58.677905 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 03:51:58.677911 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 03:51:58.677917 kernel: pnp: PnP ACPI init
Apr 21 03:51:58.677981 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 21 03:51:58.677993 kernel: pnp: PnP ACPI: found 6 devices
Apr 21 03:51:58.677999 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 03:51:58.678005 kernel: NET: Registered PF_INET protocol family
Apr 21 03:51:58.678011 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 21 03:51:58.678017 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 21 03:51:58.678023 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 03:51:58.678029 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 03:51:58.678035 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 21 03:51:58.678293 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 21 03:51:58.678331 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 03:51:58.678337 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 21 03:51:58.678345 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 03:51:58.678352 kernel: NET: Registered PF_XDP protocol family
Apr 21 03:51:58.678456 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 03:51:58.678544 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 03:51:58.678619 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 03:51:58.678710 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 21 03:51:58.678784 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 21 03:51:58.678854 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 21 03:51:58.678867 kernel: PCI: CLS 0 bytes, default 64
Apr 21 03:51:58.678876 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 03:51:58.678886 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 21 03:51:58.678896 kernel: Initialise system trusted keyrings
Apr 21 03:51:58.678907 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 21 03:51:58.678916 kernel: Key type asymmetric registered
Apr 21 03:51:58.678926 kernel: Asymmetric key parser 'x509' registered
Apr 21 03:51:58.678939 kernel: hrtimer: interrupt took 7817754 ns
Apr 21 03:51:58.678950 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Apr 21 03:51:58.678956 kernel: io scheduler mq-deadline registered
Apr 21 03:51:58.678962 kernel: io scheduler kyber registered
Apr 21 03:51:58.678968 kernel: io scheduler bfq registered
Apr 21 03:51:58.678974 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 03:51:58.678981 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 21 03:51:58.678987 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 21 03:51:58.678993 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 21 03:51:58.679001 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 03:51:58.679007 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 03:51:58.679013 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 03:51:58.679019 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 03:51:58.679024 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 03:51:58.679385 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 21 03:51:58.679434 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 03:51:58.679487 kernel: rtc_cmos 00:04: registered as rtc0
Apr 21 03:51:58.679612 kernel: rtc_cmos 00:04: setting system clock to 2026-04-21T03:51:57 UTC (1776743517)
Apr 21 03:51:58.679666 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 21 03:51:58.679683 kernel: intel_pstate: CPU model not supported
Apr 21 03:51:58.679693 kernel: NET: Registered PF_INET6 protocol family
Apr 21 03:51:58.679701 kernel: Segment Routing with IPv6
Apr 21 03:51:58.679709 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 03:51:58.679719 kernel: NET: Registered PF_PACKET protocol family
Apr 21 03:51:58.679726 kernel: Key type dns_resolver registered
Apr 21 03:51:58.679755 kernel: IPI shorthand broadcast: enabled
Apr 21 03:51:58.679779 kernel: sched_clock: Marking stable (7769078174, 474905821)->(8837731063, -593747068)
Apr 21 03:51:58.679786 kernel: registered taskstats version 1
Apr 21 03:51:58.679792 kernel: Loading compiled-in X.509 certificates
Apr 21 03:51:58.679798 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.81-flatcar: bc6d78cd9d700d9d34e2c2c5bd3cbf2a73898336'
Apr 21 03:51:58.679807 kernel: Demotion targets for Node 0: null
Apr 21 03:51:58.679815 kernel: Key type .fscrypt registered
Apr 21 03:51:58.679823 kernel: Key type fscrypt-provisioning registered
Apr 21 03:51:58.679832 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 03:51:58.679838 kernel: ima: Allocated hash algorithm: sha1
Apr 21 03:51:58.679851 kernel: ima: No architecture policies found
Apr 21 03:51:58.679857 kernel: clk: Disabling unused clocks
Apr 21 03:51:58.679864 kernel: Warning: unable to open an initial console.
Apr 21 03:51:58.679872 kernel: Freeing unused kernel image (initmem) memory: 46228K
Apr 21 03:51:58.679878 kernel: Write protecting the kernel read-only data: 40960k
Apr 21 03:51:58.679884 kernel: Freeing unused kernel image (rodata/data gap) memory: 548K
Apr 21 03:51:58.679890 kernel: Run /init as init process
Apr 21 03:51:58.679896 kernel: with arguments:
Apr 21 03:51:58.679905 kernel: /init
Apr 21 03:51:58.679917 kernel: with environment:
Apr 21 03:51:58.679922 kernel: HOME=/
Apr 21 03:51:58.679928 kernel: TERM=linux
Apr 21 03:51:58.679935 systemd[1]: Successfully made /usr/ read-only.
Apr 21 03:51:58.679945 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 21 03:51:58.679971 systemd[1]: Detected virtualization kvm.
Apr 21 03:51:58.679984 systemd[1]: Detected architecture x86-64.
Apr 21 03:51:58.679992 systemd[1]: Running in initrd.
Apr 21 03:51:58.679998 systemd[1]: No hostname configured, using default hostname.
Apr 21 03:51:58.680004 systemd[1]: Hostname set to .
Apr 21 03:51:58.680011 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 03:51:58.680017 systemd[1]: Queued start job for default target initrd.target.
Apr 21 03:51:58.680023 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 03:51:58.680031 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 03:51:58.680446 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 03:51:58.680552 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 03:51:58.680563 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 03:51:58.680571 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 03:51:58.680579 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 03:51:58.680586 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 03:51:58.680606 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 03:51:58.680613 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 03:51:58.680619 systemd[1]: Reached target paths.target - Path Units.
Apr 21 03:51:58.680626 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 03:51:58.680632 systemd[1]: Reached target swap.target - Swaps.
Apr 21 03:51:58.680639 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 03:51:58.680645 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 03:51:58.680652 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 03:51:58.680659 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 03:51:58.680667 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Apr 21 03:51:58.680673 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 03:51:58.680680 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 03:51:58.680686 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 03:51:58.680692 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 03:51:58.680699 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 03:51:58.680707 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 03:51:58.680714 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 03:51:58.680720 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Apr 21 03:51:58.680727 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 03:51:58.680733 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 03:51:58.680739 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 03:51:58.680746 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 03:51:58.684846 systemd-journald[202]: Collecting audit messages is disabled.
Apr 21 03:51:58.685017 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 03:51:58.685029 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 03:51:58.685056 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 03:51:58.685066 systemd-journald[202]: Journal started
Apr 21 03:51:58.685082 systemd-journald[202]: Runtime Journal (/run/log/journal/cca566dc6a334942818e3fd9b58aed66) is 6M, max 48.2M, 42.2M free.
Apr 21 03:51:58.700425 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 03:51:58.715834 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 03:51:58.732368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 03:51:58.748111 systemd-modules-load[205]: Inserted module 'overlay'
Apr 21 03:51:58.984537 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 03:51:58.984599 kernel: Bridge firewalling registered
Apr 21 03:51:58.772217 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 03:51:58.780947 systemd-tmpfiles[217]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Apr 21 03:51:58.829052 systemd-modules-load[205]: Inserted module 'br_netfilter'
Apr 21 03:51:58.987999 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 03:51:58.991475 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 03:51:58.996978 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 03:51:59.028676 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 03:51:59.035913 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 03:51:59.041018 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 03:51:59.093863 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 03:51:59.101591 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 03:51:59.105471 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 03:51:59.142527 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 03:51:59.190360 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 03:51:59.306897 systemd-resolved[239]: Positive Trust Anchors:
Apr 21 03:51:59.307107 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 03:51:59.307134 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 03:51:59.315025 systemd-resolved[239]: Defaulting to hostname 'linux'.
Apr 21 03:51:59.337251 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=bff44a95b1e301b8c626c31d9593bbb30c469579bd546b0b84b6f8eaed8c72f7
Apr 21 03:51:59.318085 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 03:51:59.335933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 03:51:59.807595 kernel: SCSI subsystem initialized
Apr 21 03:51:59.824298 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 03:51:59.885774 kernel: iscsi: registered transport (tcp)
Apr 21 03:51:59.965179 kernel: iscsi: registered transport (qla4xxx)
Apr 21 03:51:59.965750 kernel: QLogic iSCSI HBA Driver
Apr 21 03:52:00.114991 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 03:52:00.179635 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 03:52:00.183336 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 03:52:00.899576 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 03:52:00.911039 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 03:52:01.211691 kernel: raid6: avx512x4 gen() 5764 MB/s
Apr 21 03:52:01.238917 kernel: raid6: avx512x2 gen() 27823 MB/s
Apr 21 03:52:01.271343 kernel: raid6: avx512x1 gen() 16270 MB/s
Apr 21 03:52:01.287482 kernel: raid6: avx2x4 gen() 24783 MB/s
Apr 21 03:52:01.304443 kernel: raid6: avx2x2 gen() 25100 MB/s
Apr 21 03:52:01.340043 kernel: raid6: avx2x1 gen() 8096 MB/s
Apr 21 03:52:01.340697 kernel: raid6: using algorithm avx512x2 gen() 27823 MB/s
Apr 21 03:52:01.366628 kernel: raid6: .... xor() 16228 MB/s, rmw enabled
Apr 21 03:52:01.367914 kernel: raid6: using avx512x2 recovery algorithm
Apr 21 03:52:01.469255 kernel: xor: automatically using best checksumming function avx
Apr 21 03:52:02.142279 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 03:52:02.190393 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 03:52:02.201548 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 03:52:02.292501 systemd-udevd[452]: Using default interface naming scheme 'v255'.
Apr 21 03:52:02.321817 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 03:52:02.377336 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 21 03:52:02.524722 dracut-pre-trigger[455]: rd.md=0: removing MD RAID activation
Apr 21 03:52:02.778326 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 03:52:02.790930 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 03:52:03.014264 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 03:52:03.029096 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 03:52:03.185762 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 21 03:52:03.232205 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 03:52:03.274599 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 21 03:52:03.286764 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 03:52:03.287037 kernel: GPT:9289727 != 19775487
Apr 21 03:52:03.287109 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 03:52:03.287126 kernel: GPT:9289727 != 19775487
Apr 21 03:52:03.289059 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 03:52:03.290490 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 03:52:03.392506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 03:52:03.392685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 03:52:03.406302 kernel: AES CTR mode by8 optimization enabled
Apr 21 03:52:03.406354 kernel: libata version 3.00 loaded.
Apr 21 03:52:03.401865 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 03:52:03.430516 kernel: ahci 0000:00:1f.2: version 3.0
Apr 21 03:52:03.433187 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 21 03:52:03.433216 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Apr 21 03:52:03.504596 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Apr 21 03:52:03.505702 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 21 03:52:03.503726 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 03:52:03.527005 kernel: scsi host0: ahci
Apr 21 03:52:03.527397 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Apr 21 03:52:03.527411 kernel: scsi host1: ahci
Apr 21 03:52:03.527489 kernel: scsi host2: ahci
Apr 21 03:52:03.518058 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 21 03:52:03.532560 kernel: scsi host3: ahci
Apr 21 03:52:03.557408 kernel: scsi host4: ahci
Apr 21 03:52:03.622195 kernel: scsi host5: ahci
Apr 21 03:52:03.660137 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 21 03:52:03.666333 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 lpm-pol 1
Apr 21 03:52:03.666594 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 lpm-pol 1
Apr 21 03:52:03.666615 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 lpm-pol 1
Apr 21 03:52:03.669315 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 lpm-pol 1
Apr 21 03:52:03.669713 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 lpm-pol 1
Apr 21 03:52:03.669880 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 lpm-pol 1
Apr 21 03:52:03.682701 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 21 03:52:03.943128 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 03:52:03.997517 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 21 03:52:04.011004 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 21 03:52:04.017547 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 21 03:52:04.036771 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 21 03:52:04.037064 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 21 03:52:04.046578 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 21 03:52:04.076493 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 21 03:52:04.078357 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 21 03:52:04.078532 kernel: ata3.00: LPM support broken, forcing max_power
Apr 21 03:52:04.080734 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 21 03:52:04.080918 kernel: ata3.00: applying bridge limits
Apr 21 03:52:04.081998 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 21 03:52:04.090769 kernel: ata3.00: LPM support broken, forcing max_power
Apr 21 03:52:04.090796 kernel: ata3.00: configured for UDMA/100
Apr 21 03:52:04.090821 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 21 03:52:04.091087 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 03:52:04.206332 disk-uuid[641]: Primary Header is updated.
Apr 21 03:52:04.206332 disk-uuid[641]: Secondary Entries is updated.
Apr 21 03:52:04.206332 disk-uuid[641]: Secondary Header is updated.
Apr 21 03:52:04.220313 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 03:52:04.364625 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 21 03:52:04.364949 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 21 03:52:04.420659 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 21 03:52:05.101136 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 03:52:05.133983 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 03:52:05.156260 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 03:52:05.162888 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 03:52:05.171734 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 03:52:05.298190 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 21 03:52:05.298173 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 03:52:05.302065 disk-uuid[642]: The operation has completed successfully.
Apr 21 03:52:05.437419 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 03:52:05.437625 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 03:52:05.570876 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 03:52:05.597883 sh[671]: Success
Apr 21 03:52:05.673763 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 03:52:05.680144 kernel: device-mapper: uevent: version 1.0.3
Apr 21 03:52:05.681455 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Apr 21 03:52:05.740351 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Apr 21 03:52:05.941663 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 03:52:06.025062 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 03:52:06.059436 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 03:52:06.073511 kernel: BTRFS: device fsid f0ffb5f7-32a8-4c02-8f56-14d7d8f0dab5 devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (683)
Apr 21 03:52:06.080874 kernel: BTRFS info (device dm-0): first mount of filesystem f0ffb5f7-32a8-4c02-8f56-14d7d8f0dab5
Apr 21 03:52:06.081426 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 03:52:06.148358 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Apr 21 03:52:06.148701 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Apr 21 03:52:06.185868 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 03:52:06.188240 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Apr 21 03:52:06.193052 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 03:52:06.195688 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 03:52:06.200504 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 03:52:06.343403 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (714)
Apr 21 03:52:06.350632 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc
Apr 21 03:52:06.350975 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 03:52:06.380255 kernel: BTRFS info (device vda6): turning on async discard
Apr 21 03:52:06.380589 kernel: BTRFS info (device vda6): enabling free space tree
Apr 21 03:52:06.422409 kernel: BTRFS info (device vda6): last unmount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc
Apr 21 03:52:06.432410 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 03:52:06.456812 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 03:52:06.883902 ignition[756]: Ignition 2.22.0
Apr 21 03:52:06.884332 ignition[756]: Stage: fetch-offline
Apr 21 03:52:06.884601 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Apr 21 03:52:06.884611 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 03:52:06.885521 ignition[756]: parsed url from cmdline: ""
Apr 21 03:52:06.885532 ignition[756]: no config URL provided
Apr 21 03:52:06.885588 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 03:52:06.885622 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Apr 21 03:52:06.885679 ignition[756]: op(1): [started] loading QEMU firmware config module
Apr 21 03:52:06.885685 ignition[756]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 21 03:52:06.907532 ignition[756]: op(1): [finished] loading QEMU firmware config module
Apr 21 03:52:06.913990 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 03:52:06.966312 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 03:52:07.131298 systemd-networkd[860]: lo: Link UP
Apr 21 03:52:07.131320 systemd-networkd[860]: lo: Gained carrier
Apr 21 03:52:07.135931 systemd-networkd[860]: Enumeration completed
Apr 21 03:52:07.136463 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 03:52:07.145661 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 03:52:07.145668 systemd-networkd[860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 03:52:07.145963 systemd[1]: Reached target network.target - Network.
Apr 21 03:52:07.146739 systemd-networkd[860]: eth0: Link UP
Apr 21 03:52:07.171674 systemd-networkd[860]: eth0: Gained carrier
Apr 21 03:52:07.171696 systemd-networkd[860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 03:52:07.216845 systemd-networkd[860]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 21 03:52:07.231496 ignition[756]: parsing config with SHA512: 26d13a2b3d0625094fb32ed87fbc0d68675862c856cc8db07b2210f8d44b8032f7c29a15da65e8516fff0ab34d64178c304b10928287c9b524f4e89bc35a7576
Apr 21 03:52:07.256989 unknown[756]: fetched base config from "system"
Apr 21 03:52:07.257255 unknown[756]: fetched user config from "qemu"
Apr 21 03:52:07.257981 ignition[756]: fetch-offline: fetch-offline passed
Apr 21 03:52:07.263780 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 03:52:07.258124 ignition[756]: Ignition finished successfully
Apr 21 03:52:07.265054 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 21 03:52:07.267543 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 03:52:07.416046 ignition[865]: Ignition 2.22.0
Apr 21 03:52:07.416454 ignition[865]: Stage: kargs
Apr 21 03:52:07.417523 systemd-resolved[239]: Detected conflict on linux IN A 10.0.0.123
Apr 21 03:52:07.417456 ignition[865]: no configs at "/usr/lib/ignition/base.d"
Apr 21 03:52:07.417535 systemd-resolved[239]: Hostname conflict, changing published hostname from 'linux' to 'linux2'.
Apr 21 03:52:07.417518 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 03:52:07.428895 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 03:52:07.422275 ignition[865]: kargs: kargs passed
Apr 21 03:52:07.422482 ignition[865]: Ignition finished successfully
Apr 21 03:52:07.463056 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 03:52:07.661662 ignition[873]: Ignition 2.22.0
Apr 21 03:52:07.662336 ignition[873]: Stage: disks
Apr 21 03:52:07.667716 ignition[873]: no configs at "/usr/lib/ignition/base.d"
Apr 21 03:52:07.667731 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 03:52:07.682377 ignition[873]: disks: disks passed
Apr 21 03:52:07.685442 ignition[873]: Ignition finished successfully
Apr 21 03:52:07.691851 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 03:52:07.695365 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 03:52:07.700983 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 03:52:07.725001 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 03:52:07.729898 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 03:52:07.733922 systemd[1]: Reached target basic.target - Basic System.
Apr 21 03:52:07.742723 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 03:52:07.843840 systemd-fsck[883]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Apr 21 03:52:07.877005 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 03:52:07.884012 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 03:52:08.295695 kernel: EXT4-fs (vda9): mounted filesystem 146ef5ea-4935-456e-a7a6-cf0210fee567 r/w with ordered data mode. Quota mode: none.
Apr 21 03:52:08.300398 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 03:52:08.304005 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 03:52:08.308681 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 03:52:08.314935 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 03:52:08.315901 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 03:52:08.316035 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 03:52:08.316086 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 03:52:08.368256 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (891)
Apr 21 03:52:08.368450 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc
Apr 21 03:52:08.368467 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 03:52:08.368983 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 03:52:08.390717 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 03:52:08.423670 kernel: BTRFS info (device vda6): turning on async discard
Apr 21 03:52:08.438550 kernel: BTRFS info (device vda6): enabling free space tree
Apr 21 03:52:08.436362 systemd-networkd[860]: eth0: Gained IPv6LL
Apr 21 03:52:08.444704 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 03:52:08.643931 initrd-setup-root[915]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 03:52:08.665503 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory
Apr 21 03:52:08.711787 initrd-setup-root[929]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 03:52:08.726436 initrd-setup-root[936]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 03:52:09.323758 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 03:52:09.329915 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 03:52:09.332845 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 03:52:09.394479 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 03:52:09.397301 kernel: BTRFS info (device vda6): last unmount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc
Apr 21 03:52:09.489840 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 03:52:09.528819 ignition[1004]: INFO : Ignition 2.22.0
Apr 21 03:52:09.528819 ignition[1004]: INFO : Stage: mount
Apr 21 03:52:09.546467 ignition[1004]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 03:52:09.546467 ignition[1004]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 03:52:09.546467 ignition[1004]: INFO : mount: mount passed
Apr 21 03:52:09.554811 ignition[1004]: INFO : Ignition finished successfully
Apr 21 03:52:09.551935 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 03:52:09.556801 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 03:52:09.661573 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 03:52:09.732291 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1017)
Apr 21 03:52:09.739260 kernel: BTRFS info (device vda6): first mount of filesystem c8a1e0fd-1038-473a-a82d-f70d62b109dc
Apr 21 03:52:09.739610 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 03:52:09.780238 kernel: BTRFS info (device vda6): turning on async discard
Apr 21 03:52:09.780864 kernel: BTRFS info (device vda6): enabling free space tree
Apr 21 03:52:09.786951 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 03:52:09.927750 ignition[1034]: INFO : Ignition 2.22.0
Apr 21 03:52:09.930924 ignition[1034]: INFO : Stage: files
Apr 21 03:52:09.932504 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 03:52:09.932504 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 03:52:09.948784 ignition[1034]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 03:52:09.958668 ignition[1034]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 03:52:09.958668 ignition[1034]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 03:52:09.971944 ignition[1034]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 03:52:09.975800 ignition[1034]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 03:52:10.014442 unknown[1034]: wrote ssh authorized keys file for user: core
Apr 21 03:52:10.025627 ignition[1034]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 03:52:10.067865 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 03:52:10.072662 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 03:52:10.185777 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 03:52:10.435307 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 03:52:10.435307 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 03:52:10.489450 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 03:52:10.489450 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 03:52:10.489450 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 03:52:10.489450 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 03:52:10.489450 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 03:52:10.489450 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 03:52:10.489450 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 03:52:10.489450 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 03:52:10.577956 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 03:52:10.577956 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 03:52:10.577956 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 03:52:10.577956 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 03:52:10.577956 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 21 03:52:10.982142 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 03:52:13.219961 ignition[1034]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 21 03:52:13.219961 ignition[1034]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 03:52:13.272197 ignition[1034]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 03:52:13.277522 ignition[1034]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 03:52:13.277522 ignition[1034]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 03:52:13.277522 ignition[1034]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 21 03:52:13.277522 ignition[1034]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 03:52:13.277522 ignition[1034]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 21 03:52:13.277522 ignition[1034]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 21 03:52:13.277522 ignition[1034]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 21 03:52:13.521622 ignition[1034]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 03:52:13.538257 ignition[1034]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 21 03:52:13.542756 ignition[1034]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 21 03:52:13.546037 ignition[1034]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 03:52:13.546037 ignition[1034]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 03:52:13.569721 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 03:52:13.574956 ignition[1034]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 03:52:13.574956 ignition[1034]: INFO : files: files passed
Apr 21 03:52:13.574956 ignition[1034]: INFO : Ignition finished successfully
Apr 21 03:52:13.599002 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 21 03:52:13.669822 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 03:52:13.689182 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 03:52:13.696086 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 03:52:13.696269 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 03:52:13.809472 initrd-setup-root-after-ignition[1063]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 21 03:52:13.821949 initrd-setup-root-after-ignition[1065]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 03:52:13.821949 initrd-setup-root-after-ignition[1065]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 03:52:13.828927 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 03:52:13.831108 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 03:52:13.836112 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 03:52:13.841962 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 03:52:14.035550 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 03:52:14.036263 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 03:52:14.037481 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 03:52:14.037771 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 03:52:14.045074 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 03:52:14.104853 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 03:52:14.189487 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 03:52:14.236042 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 03:52:14.304863 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 03:52:14.316558 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 03:52:14.318639 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 03:52:14.321339 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 03:52:14.321546 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 03:52:14.327256 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 03:52:14.329975 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 03:52:14.334386 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 03:52:14.337102 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 03:52:14.342981 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 03:52:14.347921 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Apr 21 03:52:14.365051 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 03:52:14.369487 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 03:52:14.370336 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 03:52:14.375509 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 03:52:14.377025 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 03:52:14.384588 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 03:52:14.385133 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 03:52:14.392180 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 03:52:14.393067 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 03:52:14.402128 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 03:52:14.427442 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 03:52:14.435046 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 03:52:14.435444 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 03:52:14.439594 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 03:52:14.439795 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 03:52:14.443771 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 03:52:14.447050 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 03:52:14.449289 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 03:52:14.453899 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 03:52:14.461511 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 03:52:14.495119 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 03:52:14.495780 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 03:52:14.498392 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 03:52:14.498660 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 03:52:14.504295 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 03:52:14.504455 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 03:52:14.526568 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 03:52:14.537001 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 03:52:14.547317 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 03:52:14.551074 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 03:52:14.551941 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 03:52:14.555414 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 03:52:14.562871 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 03:52:14.563308 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 03:52:14.563826 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 03:52:14.563964 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 03:52:14.615131 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 03:52:14.615906 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 03:52:14.676244 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 03:52:14.679134 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 03:52:14.679727 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 03:52:14.706466 ignition[1089]: INFO : Ignition 2.22.0
Apr 21 03:52:14.729033 ignition[1089]: INFO : Stage: umount
Apr 21 03:52:14.729033 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 03:52:14.729033 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 21 03:52:14.740753 ignition[1089]: INFO : umount: umount passed
Apr 21 03:52:14.740753 ignition[1089]: INFO : Ignition finished successfully
Apr 21 03:52:14.747766 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 03:52:14.790655 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 03:52:14.792809 systemd[1]: Stopped target network.target - Network.
Apr 21 03:52:14.795759 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 03:52:14.795902 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 03:52:14.800199 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 03:52:14.800493 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 03:52:14.803891 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 03:52:14.803988 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 03:52:14.809290 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 03:52:14.809529 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 03:52:14.813931 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 03:52:14.814074 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 03:52:14.817112 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 03:52:14.823526 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 03:52:14.846437 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 03:52:14.847007 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 03:52:14.860883 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Apr 21 03:52:14.861049 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 03:52:14.862322 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 03:52:14.872845 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Apr 21 03:52:14.874361 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Apr 21 03:52:14.877263 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 03:52:14.877378 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 03:52:14.888363 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 03:52:14.889521 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 03:52:14.889720 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 03:52:14.893750 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 03:52:14.894349 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 03:52:14.918090 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 03:52:14.918728 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 03:52:14.929411 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 03:52:14.942827 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 03:52:14.976815 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 03:52:14.985805 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Apr 21 03:52:14.985869 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Apr 21 03:52:15.002548 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 03:52:15.002799 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 03:52:15.040644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 03:52:15.042241 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 03:52:15.046742 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 03:52:15.046806 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 03:52:15.059821 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 03:52:15.060041 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 03:52:15.065745 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 03:52:15.072995 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 03:52:15.079494 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 03:52:15.079699 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 03:52:15.133543 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 03:52:15.135624 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Apr 21 03:52:15.135706 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 03:52:15.145055 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 03:52:15.145115 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 03:52:15.179741 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 03:52:15.179851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 03:52:15.189599 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Apr 21 03:52:15.190038 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Apr 21 03:52:15.190838 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Apr 21 03:52:15.192746 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 03:52:15.193086 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 03:52:15.243027 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 03:52:15.243201 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 03:52:15.249101 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 03:52:15.273838 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 03:52:15.334771 systemd[1]: Switching root.
Apr 21 03:52:15.423526 systemd-journald[202]: Journal stopped
Apr 21 03:52:19.789966 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Apr 21 03:52:19.790516 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 03:52:19.790656 kernel: SELinux: policy capability open_perms=1
Apr 21 03:52:19.790668 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 03:52:19.790679 kernel: SELinux: policy capability always_check_network=0
Apr 21 03:52:19.790691 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 03:52:19.790702 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 03:52:19.790709 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 03:52:19.790741 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 03:52:19.790771 kernel: SELinux: policy capability userspace_initial_context=0
Apr 21 03:52:19.790780 kernel: audit: type=1403 audit(1776743535.903:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 03:52:19.790804 systemd[1]: Successfully loaded SELinux policy in 166.830ms.
Apr 21 03:52:19.790818 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 62.246ms.
Apr 21 03:52:19.790828 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Apr 21 03:52:19.790837 systemd[1]: Detected virtualization kvm.
Apr 21 03:52:19.790845 systemd[1]: Detected architecture x86-64.
Apr 21 03:52:19.790856 systemd[1]: Detected first boot.
Apr 21 03:52:19.790864 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 03:52:19.790875 zram_generator::config[1134]: No configuration found.
Apr 21 03:52:19.790884 kernel: Guest personality initialized and is inactive
Apr 21 03:52:19.790892 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Apr 21 03:52:19.790899 kernel: Initialized host personality
Apr 21 03:52:19.790906 kernel: NET: Registered PF_VSOCK protocol family
Apr 21 03:52:19.790914 systemd[1]: Populated /etc with preset unit settings.
Apr 21 03:52:19.790925 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Apr 21 03:52:19.790933 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 03:52:19.790944 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 03:52:19.790952 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 03:52:19.790961 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 03:52:19.790971 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 03:52:19.790978 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 03:52:19.790986 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 03:52:19.790995 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 03:52:19.791002 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 03:52:19.791010 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 03:52:19.791021 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 03:52:19.791030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 03:52:19.791038 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 03:52:19.791046 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 03:52:19.791054 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 03:52:19.791062 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 03:52:19.791069 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 03:52:19.791080 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 03:52:19.791088 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 03:52:19.791096 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 03:52:19.791104 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 21 03:52:19.791112 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 03:52:19.791120 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 03:52:19.791128 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 03:52:19.791135 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 03:52:19.791144 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 03:52:19.791275 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 03:52:19.791286 systemd[1]: Reached target swap.target - Swaps.
Apr 21 03:52:19.791296 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 03:52:19.791304 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 03:52:19.791312 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Apr 21 03:52:19.791320 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 03:52:19.791331 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 03:52:19.791339 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 03:52:19.791347 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 03:52:19.791355 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 03:52:19.791365 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 03:52:19.791373 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 03:52:19.791381 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 03:52:19.791389 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 03:52:19.791397 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 03:52:19.791411 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 03:52:19.791425 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 03:52:19.791438 systemd[1]: Reached target machines.target - Containers.
Apr 21 03:52:19.791453 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 03:52:19.791467 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 03:52:19.791474 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 03:52:19.791483 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 03:52:19.791490 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 03:52:19.791499 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 03:52:19.791507 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 03:52:19.791515 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 03:52:19.791525 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 03:52:19.791534 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 03:52:19.791541 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 03:52:19.791549 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 03:52:19.791557 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 03:52:19.791565 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 03:52:19.791574 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Apr 21 03:52:19.791582 kernel: loop: module loaded
Apr 21 03:52:19.791590 kernel: fuse: init (API version 7.41)
Apr 21 03:52:19.791599 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 03:52:19.791606 kernel: ACPI: bus type drm_connector registered
Apr 21 03:52:19.791617 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 03:52:19.791625 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 03:52:19.791633 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 03:52:19.791641 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Apr 21 03:52:19.791649 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 03:52:19.791708 systemd-journald[1219]: Collecting audit messages is disabled.
Apr 21 03:52:19.791726 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 03:52:19.791737 systemd[1]: Stopped verity-setup.service.
Apr 21 03:52:19.791745 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 03:52:19.791754 systemd-journald[1219]: Journal started
Apr 21 03:52:19.791772 systemd-journald[1219]: Runtime Journal (/run/log/journal/cca566dc6a334942818e3fd9b58aed66) is 6M, max 48.2M, 42.2M free.
Apr 21 03:52:18.661015 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 03:52:18.738869 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 21 03:52:18.757623 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 03:52:18.759864 systemd[1]: systemd-journald.service: Consumed 1.831s CPU time.
Apr 21 03:52:19.803274 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 03:52:19.808038 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 03:52:19.813599 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 03:52:19.816109 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 03:52:19.818031 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 03:52:19.821972 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 03:52:19.826049 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 03:52:19.828392 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 03:52:19.831332 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 03:52:19.836618 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 03:52:19.837949 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 03:52:19.843992 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 03:52:19.847465 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 03:52:19.880496 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 03:52:19.881658 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 03:52:19.888844 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 03:52:19.891304 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 03:52:19.897116 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 03:52:19.918768 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 03:52:19.929753 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 03:52:19.930882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 03:52:19.955398 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 03:52:19.958324 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 03:52:19.964097 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 03:52:19.969671 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Apr 21 03:52:20.037004 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 03:52:20.042791 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 03:52:20.066488 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 03:52:20.068761 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 03:52:20.068828 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 03:52:20.073706 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Apr 21 03:52:20.080894 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 03:52:20.084195 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 03:52:20.085882 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 03:52:20.100491 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 03:52:20.104063 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 03:52:20.132017 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 03:52:20.176401 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 03:52:20.183777 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 03:52:20.190800 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 03:52:20.204509 systemd-journald[1219]: Time spent on flushing to /var/log/journal/cca566dc6a334942818e3fd9b58aed66 is 69.258ms for 985 entries.
Apr 21 03:52:20.204509 systemd-journald[1219]: System Journal (/var/log/journal/cca566dc6a334942818e3fd9b58aed66) is 8M, max 195.6M, 187.6M free.
Apr 21 03:52:20.337710 systemd-journald[1219]: Received client request to flush runtime journal.
Apr 21 03:52:20.337782 kernel: loop0: detected capacity change from 0 to 217752
Apr 21 03:52:20.251748 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 03:52:20.259918 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 03:52:20.262784 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 03:52:20.265821 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 03:52:20.268292 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 03:52:20.319186 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 03:52:20.333769 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Apr 21 03:52:20.375432 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 03:52:20.409050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 03:52:20.429486 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 03:52:20.529846 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 03:52:20.536121 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Apr 21 03:52:20.541526 kernel: loop1: detected capacity change from 0 to 128560
Apr 21 03:52:20.559401 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 03:52:20.576367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 03:52:20.710445 kernel: loop2: detected capacity change from 0 to 110984
Apr 21 03:52:20.841665 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Apr 21 03:52:20.842894 systemd-tmpfiles[1272]: ACLs are not supported, ignoring.
Apr 21 03:52:20.900795 kernel: loop3: detected capacity change from 0 to 217752
Apr 21 03:52:20.893609 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 03:52:21.002272 kernel: loop4: detected capacity change from 0 to 128560
Apr 21 03:52:21.127731 kernel: loop5: detected capacity change from 0 to 110984
Apr 21 03:52:21.239947 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 21 03:52:21.241051 (sd-merge)[1277]: Merged extensions into '/usr'.
Apr 21 03:52:21.253401 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 03:52:21.260740 systemd[1]: Reloading...
Apr 21 03:52:21.735646 zram_generator::config[1304]: No configuration found.
Apr 21 03:52:22.666269 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 21 03:52:22.976707 systemd[1]: Reloading finished in 1688 ms.
Apr 21 03:52:23.045772 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 03:52:23.092555 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 03:52:23.100903 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 03:52:23.220627 systemd[1]: Starting ensure-sysext.service...
Apr 21 03:52:23.232370 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 03:52:23.258631 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 03:52:23.301882 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Apr 21 03:52:23.302602 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Apr 21 03:52:23.303882 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 21 03:52:23.304677 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 21 03:52:23.307345 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 21 03:52:23.310574 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Apr 21 03:52:23.312576 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Apr 21 03:52:23.347795 systemd[1]: Reload requested from client PID 1341 ('systemctl') (unit ensure-sysext.service)... Apr 21 03:52:23.349179 systemd[1]: Reloading... Apr 21 03:52:23.386823 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 03:52:23.386910 systemd-tmpfiles[1342]: Skipping /boot Apr 21 03:52:23.425828 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Apr 21 03:52:23.426491 systemd-tmpfiles[1342]: Skipping /boot Apr 21 03:52:23.514970 systemd-udevd[1344]: Using default interface naming scheme 'v255'. Apr 21 03:52:23.703209 zram_generator::config[1373]: No configuration found. Apr 21 03:52:24.713573 kernel: mousedev: PS/2 mouse device common for all mice Apr 21 03:52:24.763485 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 21 03:52:24.774053 kernel: ACPI: button: Power Button [PWRF] Apr 21 03:52:24.899095 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 21 03:52:24.900566 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 21 03:52:24.909102 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Apr 21 03:52:24.941599 systemd[1]: Reloading finished in 1560 ms. Apr 21 03:52:25.015912 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 21 03:52:25.118932 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 21 03:52:25.177994 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 21 03:52:25.419394 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 03:52:25.426888 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 21 03:52:25.440020 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 21 03:52:25.447139 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 03:52:25.470567 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 21 03:52:25.511078 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 03:52:25.527967 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 03:52:25.530762 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 03:52:25.570123 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 21 03:52:25.573018 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 21 03:52:25.621922 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 21 03:52:25.688785 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 21 03:52:25.701780 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 21 03:52:25.714917 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 21 03:52:25.728589 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 03:52:25.735656 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 03:52:25.735822 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 03:52:25.812006 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 03:52:25.813789 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 21 03:52:25.820806 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 03:52:25.821798 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 03:52:25.865046 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 21 03:52:25.919438 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 21 03:52:26.066333 augenrules[1493]: No rules Apr 21 03:52:26.139619 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 03:52:26.176972 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 21 03:52:26.263042 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 21 03:52:26.383416 systemd[1]: Finished ensure-sysext.service. Apr 21 03:52:26.393026 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 03:52:26.393712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 21 03:52:26.399710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 21 03:52:26.407549 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 21 03:52:26.466525 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 21 03:52:26.486398 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 21 03:52:26.489761 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 21 03:52:26.490863 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 21 03:52:26.534757 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 21 03:52:26.541358 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 21 03:52:26.551604 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 21 03:52:26.598545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 21 03:52:26.601699 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 21 03:52:26.613716 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 21 03:52:26.664542 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 21 03:52:26.667445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 21 03:52:26.672922 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 21 03:52:26.681937 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 21 03:52:26.686354 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 21 03:52:26.730631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Apr 21 03:52:26.743795 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 21 03:52:26.746022 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 21 03:52:26.771691 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 21 03:52:26.820590 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 21 03:52:26.821650 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 21 03:52:26.822130 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 21 03:52:27.027023 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 21 03:52:27.471732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 21 03:52:27.719242 systemd-networkd[1475]: lo: Link UP Apr 21 03:52:27.719273 systemd-networkd[1475]: lo: Gained carrier Apr 21 03:52:27.729939 systemd-networkd[1475]: Enumeration completed Apr 21 03:52:27.730462 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 21 03:52:27.733421 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 21 03:52:27.733528 systemd-networkd[1475]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 21 03:52:27.735611 systemd-networkd[1475]: eth0: Link UP Apr 21 03:52:27.735765 systemd-networkd[1475]: eth0: Gained carrier Apr 21 03:52:27.736479 systemd-networkd[1475]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 21 03:52:27.742029 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 21 03:52:27.829538 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 21 03:52:27.852733 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 21 03:52:27.857691 systemd[1]: Reached target time-set.target - System Time Set. Apr 21 03:52:27.858528 systemd-resolved[1478]: Positive Trust Anchors: Apr 21 03:52:27.858670 systemd-resolved[1478]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 21 03:52:27.858721 systemd-resolved[1478]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 21 03:52:27.859778 systemd-networkd[1475]: eth0: DHCPv4 address 10.0.0.123/16, gateway 10.0.0.1 acquired from 10.0.0.1 Apr 21 03:52:27.865072 systemd-timesyncd[1506]: Network configuration changed, trying to establish connection. Apr 21 03:52:27.869450 systemd-timesyncd[1506]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 21 03:52:27.870077 systemd-timesyncd[1506]: Initial clock synchronization to Tue 2026-04-21 03:52:27.900745 UTC. Apr 21 03:52:27.871614 systemd-resolved[1478]: Defaulting to hostname 'linux'. Apr 21 03:52:27.878205 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 21 03:52:27.883236 systemd[1]: Reached target network.target - Network. 
Apr 21 03:52:27.886756 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 21 03:52:27.890012 systemd[1]: Reached target sysinit.target - System Initialization. Apr 21 03:52:27.893590 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 21 03:52:27.897572 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 21 03:52:27.901013 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Apr 21 03:52:27.904406 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 21 03:52:27.907354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 21 03:52:27.912805 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 21 03:52:27.917931 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 21 03:52:27.918778 systemd[1]: Reached target paths.target - Path Units. Apr 21 03:52:27.927057 systemd[1]: Reached target timers.target - Timer Units. Apr 21 03:52:28.002884 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 21 03:52:28.015139 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 21 03:52:28.023786 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 21 03:52:28.028761 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 21 03:52:28.032075 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 21 03:52:28.047859 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 21 03:52:28.053989 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Apr 21 03:52:28.061575 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Apr 21 03:52:28.068279 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 21 03:52:28.092758 systemd[1]: Reached target sockets.target - Socket Units. Apr 21 03:52:28.096908 systemd[1]: Reached target basic.target - Basic System. Apr 21 03:52:28.124781 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 21 03:52:28.128403 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 21 03:52:28.134702 systemd[1]: Starting containerd.service - containerd container runtime... Apr 21 03:52:28.183147 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 21 03:52:28.193293 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 21 03:52:28.201541 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 21 03:52:28.207140 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 21 03:52:28.230427 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 21 03:52:28.315196 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Apr 21 03:52:28.323536 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 21 03:52:28.323900 jq[1539]: false Apr 21 03:52:28.327550 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 21 03:52:28.333683 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 21 03:52:28.349128 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Apr 21 03:52:28.385486 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing passwd entry cache Apr 21 03:52:28.386345 oslogin_cache_refresh[1541]: Refreshing passwd entry cache Apr 21 03:52:28.389102 extend-filesystems[1540]: Found /dev/vda6 Apr 21 03:52:28.434346 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting users, quitting Apr 21 03:52:28.434346 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 21 03:52:28.434346 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Refreshing group entry cache Apr 21 03:52:28.430813 oslogin_cache_refresh[1541]: Failure getting users, quitting Apr 21 03:52:28.435056 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 21 03:52:28.430899 oslogin_cache_refresh[1541]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Apr 21 03:52:28.431077 oslogin_cache_refresh[1541]: Refreshing group entry cache Apr 21 03:52:28.448401 extend-filesystems[1540]: Found /dev/vda9 Apr 21 03:52:28.460673 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 21 03:52:28.460819 extend-filesystems[1540]: Checking size of /dev/vda9 Apr 21 03:52:28.465128 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 21 03:52:28.470195 systemd[1]: Starting update-engine.service - Update Engine... Apr 21 03:52:28.479448 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Failure getting groups, quitting Apr 21 03:52:28.478444 oslogin_cache_refresh[1541]: Failure getting groups, quitting Apr 21 03:52:28.479606 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Apr 21 03:52:28.485051 google_oslogin_nss_cache[1541]: oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 21 03:52:28.485029 oslogin_cache_refresh[1541]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Apr 21 03:52:28.486033 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 21 03:52:28.488662 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 21 03:52:28.488950 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 21 03:52:28.492874 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Apr 21 03:52:28.493049 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Apr 21 03:52:28.500709 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 21 03:52:28.537030 extend-filesystems[1540]: Resized partition /dev/vda9 Apr 21 03:52:28.534424 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 21 03:52:28.567799 extend-filesystems[1569]: resize2fs 1.47.3 (8-Jul-2025) Apr 21 03:52:28.572566 jq[1560]: true Apr 21 03:52:28.583203 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 21 03:52:28.588860 systemd[1]: motdgen.service: Deactivated successfully. Apr 21 03:52:28.592798 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Apr 21 03:52:28.679396 jq[1577]: true Apr 21 03:52:28.691480 update_engine[1556]: I20260421 03:52:28.689906 1556 main.cc:92] Flatcar Update Engine starting Apr 21 03:52:28.768224 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 21 03:52:28.839304 extend-filesystems[1569]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 21 03:52:28.839304 extend-filesystems[1569]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 21 03:52:28.839304 extend-filesystems[1569]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 21 03:52:28.938017 extend-filesystems[1540]: Resized filesystem in /dev/vda9 Apr 21 03:52:28.942883 tar[1564]: linux-amd64/LICENSE Apr 21 03:52:28.942883 tar[1564]: linux-amd64/helm Apr 21 03:52:28.840029 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 21 03:52:28.844823 (ntainerd)[1583]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 21 03:52:28.845648 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 21 03:52:29.005231 dbus-daemon[1537]: [system] SELinux support is enabled Apr 21 03:52:29.024546 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 21 03:52:29.100708 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 21 03:52:29.101077 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 21 03:52:29.105026 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 21 03:52:29.105059 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 21 03:52:29.147247 systemd-networkd[1475]: eth0: Gained IPv6LL Apr 21 03:52:29.230632 update_engine[1556]: I20260421 03:52:29.185600 1556 update_check_scheduler.cc:74] Next update check in 3m59s Apr 21 03:52:29.346974 systemd[1]: Started update-engine.service - Update Engine. Apr 21 03:52:29.396593 systemd-logind[1551]: Watching system buttons on /dev/input/event2 (Power Button) Apr 21 03:52:29.396741 systemd-logind[1551]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 21 03:52:29.482493 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 21 03:52:29.556137 bash[1602]: Updated "/home/core/.ssh/authorized_keys" Apr 21 03:52:29.484048 systemd-logind[1551]: New seat seat0. Apr 21 03:52:29.547627 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 21 03:52:29.644031 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 21 03:52:29.784025 systemd[1]: Reached target network-online.target - Network is Online. Apr 21 03:52:29.898076 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 21 03:52:29.920391 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 03:52:30.077968 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 21 03:52:30.086709 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 21 03:52:30.120472 systemd[1]: Started systemd-logind.service - User Login Management. Apr 21 03:52:30.292901 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 21 03:52:30.595316 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 21 03:52:30.651373 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 21 03:52:30.673722 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 21 03:52:30.683969 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Apr 21 03:52:30.687373 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 21 03:52:30.689827 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 21 03:52:30.692964 locksmithd[1604]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 21 03:52:30.771044 systemd[1]: issuegen.service: Deactivated successfully. Apr 21 03:52:30.771456 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 21 03:52:30.802617 containerd[1583]: time="2026-04-21T03:52:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Apr 21 03:52:30.802617 containerd[1583]: time="2026-04-21T03:52:30.777224636Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Apr 21 03:52:30.809078 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Apr 21 03:52:30.887343 containerd[1583]: time="2026-04-21T03:52:30.886872375Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="44.696µs" Apr 21 03:52:30.887831 containerd[1583]: time="2026-04-21T03:52:30.887755840Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Apr 21 03:52:30.887938 containerd[1583]: time="2026-04-21T03:52:30.887923288Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Apr 21 03:52:30.888701 containerd[1583]: time="2026-04-21T03:52:30.888543225Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Apr 21 03:52:30.889016 containerd[1583]: time="2026-04-21T03:52:30.889003632Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Apr 21 03:52:30.889069 containerd[1583]: time="2026-04-21T03:52:30.889062001Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 21 03:52:30.889196 containerd[1583]: time="2026-04-21T03:52:30.889186413Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Apr 21 03:52:30.889226 containerd[1583]: time="2026-04-21T03:52:30.889220900Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 21 03:52:30.889796 containerd[1583]: time="2026-04-21T03:52:30.889776964Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Apr 21 03:52:30.889936 containerd[1583]: time="2026-04-21T03:52:30.889924353Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 21 03:52:30.889981 containerd[1583]: time="2026-04-21T03:52:30.889973159Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Apr 21 03:52:30.890007 containerd[1583]: time="2026-04-21T03:52:30.890001816Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Apr 21 03:52:30.890103 containerd[1583]: time="2026-04-21T03:52:30.890093175Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Apr 21 03:52:30.890410 containerd[1583]: time="2026-04-21T03:52:30.890396961Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 21 03:52:30.890566 containerd[1583]: time="2026-04-21T03:52:30.890547077Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Apr 21 03:52:30.890614 containerd[1583]: time="2026-04-21T03:52:30.890605962Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Apr 21 03:52:30.891106 containerd[1583]: time="2026-04-21T03:52:30.890937093Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Apr 21 03:52:30.892960 containerd[1583]: time="2026-04-21T03:52:30.892787712Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Apr 21 03:52:30.896604 containerd[1583]: time="2026-04-21T03:52:30.896405793Z" level=info msg="metadata content store policy set" policy=shared Apr 21 03:52:30.910961 containerd[1583]: time="2026-04-21T03:52:30.910848975Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler 
type=io.containerd.gc.v1 Apr 21 03:52:30.912125 containerd[1583]: time="2026-04-21T03:52:30.911961528Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Apr 21 03:52:30.918116 containerd[1583]: time="2026-04-21T03:52:30.917823835Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Apr 21 03:52:30.921466 containerd[1583]: time="2026-04-21T03:52:30.919917579Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Apr 21 03:52:30.921466 containerd[1583]: time="2026-04-21T03:52:30.920863795Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Apr 21 03:52:30.921466 containerd[1583]: time="2026-04-21T03:52:30.920970325Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Apr 21 03:52:30.921466 containerd[1583]: time="2026-04-21T03:52:30.920998041Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Apr 21 03:52:30.921466 containerd[1583]: time="2026-04-21T03:52:30.921030659Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Apr 21 03:52:30.921466 containerd[1583]: time="2026-04-21T03:52:30.921047641Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Apr 21 03:52:30.921466 containerd[1583]: time="2026-04-21T03:52:30.921073173Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Apr 21 03:52:30.921466 containerd[1583]: time="2026-04-21T03:52:30.921129961Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Apr 21 03:52:30.921466 containerd[1583]: time="2026-04-21T03:52:30.921452502Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task 
type=io.containerd.runtime.v2 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.923808240Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.924758326Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.925326827Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.925432354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.925483484Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.925499177Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.925516678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.925529752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.925562142Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.925574711Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Apr 21 03:52:30.925560 containerd[1583]: time="2026-04-21T03:52:30.925597404Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Apr 21 03:52:30.925996 containerd[1583]: 
time="2026-04-21T03:52:30.925876353Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Apr 21 03:52:30.925996 containerd[1583]: time="2026-04-21T03:52:30.925912657Z" level=info msg="Start snapshots syncer" Apr 21 03:52:30.926026 containerd[1583]: time="2026-04-21T03:52:30.926015085Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Apr 21 03:52:30.928219 containerd[1583]: time="2026-04-21T03:52:30.927289618Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\
":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Apr 21 03:52:30.928219 containerd[1583]: time="2026-04-21T03:52:30.927526883Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Apr 21 03:52:30.929873 containerd[1583]: time="2026-04-21T03:52:30.928498998Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Apr 21 03:52:30.982031 containerd[1583]: time="2026-04-21T03:52:30.930553363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Apr 21 03:52:30.982031 containerd[1583]: time="2026-04-21T03:52:30.930830409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Apr 21 03:52:30.982031 containerd[1583]: time="2026-04-21T03:52:30.931011652Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Apr 21 03:52:30.982031 containerd[1583]: time="2026-04-21T03:52:30.931034242Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Apr 21 03:52:30.982031 containerd[1583]: time="2026-04-21T03:52:30.931086307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Apr 21 03:52:30.982031 containerd[1583]: time="2026-04-21T03:52:30.931106099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Apr 21 03:52:30.985577 containerd[1583]: time="2026-04-21T03:52:30.934059367Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Apr 21 03:52:30.986741 containerd[1583]: time="2026-04-21T03:52:30.986456287Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Apr 21 03:52:30.987599 containerd[1583]: time="2026-04-21T03:52:30.987533666Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Apr 21 03:52:30.988573 containerd[1583]: time="2026-04-21T03:52:30.988433133Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Apr 21 03:52:30.989635 containerd[1583]: time="2026-04-21T03:52:30.989137003Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 21 03:52:30.990765 containerd[1583]: time="2026-04-21T03:52:30.990541941Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Apr 21 03:52:30.991193 containerd[1583]: time="2026-04-21T03:52:30.991084194Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 21 03:52:30.991614 containerd[1583]: time="2026-04-21T03:52:30.991390524Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Apr 21 03:52:30.992717 containerd[1583]: time="2026-04-21T03:52:30.992425399Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Apr 21 03:52:30.993435 containerd[1583]: time="2026-04-21T03:52:30.993223760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Apr 21 03:52:30.994566 containerd[1583]: time="2026-04-21T03:52:30.994418823Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Apr 21 03:52:30.995732 containerd[1583]: time="2026-04-21T03:52:30.995447048Z" level=info msg="runtime interface created" Apr 21 03:52:30.997496 containerd[1583]: 
time="2026-04-21T03:52:30.996718791Z" level=info msg="created NRI interface" Apr 21 03:52:30.999491 containerd[1583]: time="2026-04-21T03:52:30.999104189Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Apr 21 03:52:31.000637 containerd[1583]: time="2026-04-21T03:52:31.000372394Z" level=info msg="Connect containerd service" Apr 21 03:52:31.002326 containerd[1583]: time="2026-04-21T03:52:31.001970596Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 03:52:31.007611 containerd[1583]: time="2026-04-21T03:52:31.007477210Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 03:52:31.007753 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 03:52:31.049655 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 21 03:52:31.060719 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 21 03:52:31.063139 systemd[1]: Reached target getty.target - Login Prompts. Apr 21 03:52:31.450694 containerd[1583]: time="2026-04-21T03:52:31.450391222Z" level=info msg="Start subscribing containerd event" Apr 21 03:52:31.451319 containerd[1583]: time="2026-04-21T03:52:31.451255147Z" level=info msg="Start recovering state" Apr 21 03:52:31.452113 containerd[1583]: time="2026-04-21T03:52:31.451590094Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 03:52:31.452532 containerd[1583]: time="2026-04-21T03:52:31.452481406Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Apr 21 03:52:31.452635 containerd[1583]: time="2026-04-21T03:52:31.452622718Z" level=info msg="Start event monitor" Apr 21 03:52:31.452727 containerd[1583]: time="2026-04-21T03:52:31.452710702Z" level=info msg="Start cni network conf syncer for default" Apr 21 03:52:31.452781 containerd[1583]: time="2026-04-21T03:52:31.452773079Z" level=info msg="Start streaming server" Apr 21 03:52:31.452821 containerd[1583]: time="2026-04-21T03:52:31.452815568Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Apr 21 03:52:31.452852 containerd[1583]: time="2026-04-21T03:52:31.452846414Z" level=info msg="runtime interface starting up..." Apr 21 03:52:31.452878 containerd[1583]: time="2026-04-21T03:52:31.452873351Z" level=info msg="starting plugins..." Apr 21 03:52:31.453110 containerd[1583]: time="2026-04-21T03:52:31.453042402Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Apr 21 03:52:31.454460 containerd[1583]: time="2026-04-21T03:52:31.454413882Z" level=info msg="containerd successfully booted in 0.682917s" Apr 21 03:52:31.454963 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 03:52:31.670242 tar[1564]: linux-amd64/README.md Apr 21 03:52:31.825674 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 21 03:52:33.983980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 03:52:33.992033 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 21 03:52:33.994688 systemd[1]: Startup finished in 8.030s (kernel) + 17.941s (initrd) + 18.259s (userspace) = 44.232s. 
Apr 21 03:52:34.148316 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 03:52:36.199722 kubelet[1674]: E0421 03:52:36.198098 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 03:52:36.208478 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 03:52:36.208980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 03:52:36.212596 systemd[1]: kubelet.service: Consumed 2.872s CPU time, 258.1M memory peak. Apr 21 03:52:38.101405 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 21 03:52:38.107653 systemd[1]: Started sshd@0-10.0.0.123:22-10.0.0.1:39696.service - OpenSSH per-connection server daemon (10.0.0.1:39696). Apr 21 03:52:38.413491 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 39696 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:52:38.427635 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:52:38.446072 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 21 03:52:38.448328 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 21 03:52:38.520995 systemd-logind[1551]: New session 1 of user core. Apr 21 03:52:38.588711 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 21 03:52:38.598837 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 21 03:52:38.634936 (systemd)[1692]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 21 03:52:38.654753 systemd-logind[1551]: New session c1 of user core. Apr 21 03:52:39.109934 systemd[1692]: Queued start job for default target default.target. Apr 21 03:52:39.130919 systemd[1692]: Created slice app.slice - User Application Slice. Apr 21 03:52:39.131243 systemd[1692]: Reached target paths.target - Paths. Apr 21 03:52:39.131976 systemd[1692]: Reached target timers.target - Timers. Apr 21 03:52:39.137608 systemd[1692]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 21 03:52:39.177404 systemd[1692]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 21 03:52:39.177642 systemd[1692]: Reached target sockets.target - Sockets. Apr 21 03:52:39.177718 systemd[1692]: Reached target basic.target - Basic System. Apr 21 03:52:39.177760 systemd[1692]: Reached target default.target - Main User Target. Apr 21 03:52:39.177791 systemd[1692]: Startup finished in 379ms. Apr 21 03:52:39.177930 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 21 03:52:39.193896 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 21 03:52:39.239783 systemd[1]: Started sshd@1-10.0.0.123:22-10.0.0.1:39702.service - OpenSSH per-connection server daemon (10.0.0.1:39702). Apr 21 03:52:39.536778 sshd[1703]: Accepted publickey for core from 10.0.0.1 port 39702 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:52:39.552393 sshd-session[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:52:39.575797 systemd-logind[1551]: New session 2 of user core. Apr 21 03:52:39.591634 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 21 03:52:39.698213 sshd[1706]: Connection closed by 10.0.0.1 port 39702 Apr 21 03:52:39.699475 sshd-session[1703]: pam_unix(sshd:session): session closed for user core Apr 21 03:52:39.710735 systemd[1]: sshd@1-10.0.0.123:22-10.0.0.1:39702.service: Deactivated successfully. Apr 21 03:52:39.712592 systemd[1]: session-2.scope: Deactivated successfully. Apr 21 03:52:39.715924 systemd-logind[1551]: Session 2 logged out. Waiting for processes to exit. Apr 21 03:52:39.719439 systemd[1]: Started sshd@2-10.0.0.123:22-10.0.0.1:39710.service - OpenSSH per-connection server daemon (10.0.0.1:39710). Apr 21 03:52:39.720063 systemd-logind[1551]: Removed session 2. Apr 21 03:52:39.899891 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 39710 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:52:39.903444 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:52:39.929257 systemd-logind[1551]: New session 3 of user core. Apr 21 03:52:39.938424 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 21 03:52:39.958874 sshd[1715]: Connection closed by 10.0.0.1 port 39710 Apr 21 03:52:39.959598 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Apr 21 03:52:40.037065 systemd[1]: sshd@2-10.0.0.123:22-10.0.0.1:39710.service: Deactivated successfully. Apr 21 03:52:40.038957 systemd[1]: session-3.scope: Deactivated successfully. Apr 21 03:52:40.041715 systemd-logind[1551]: Session 3 logged out. Waiting for processes to exit. Apr 21 03:52:40.050050 systemd[1]: Started sshd@3-10.0.0.123:22-10.0.0.1:39724.service - OpenSSH per-connection server daemon (10.0.0.1:39724). Apr 21 03:52:40.058623 systemd-logind[1551]: Removed session 3. 
Apr 21 03:52:40.294550 sshd[1721]: Accepted publickey for core from 10.0.0.1 port 39724 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:52:40.304571 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:52:40.406669 systemd-logind[1551]: New session 4 of user core. Apr 21 03:52:40.417475 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 21 03:52:40.495001 sshd[1724]: Connection closed by 10.0.0.1 port 39724 Apr 21 03:52:40.495802 sshd-session[1721]: pam_unix(sshd:session): session closed for user core Apr 21 03:52:40.510612 systemd[1]: sshd@3-10.0.0.123:22-10.0.0.1:39724.service: Deactivated successfully. Apr 21 03:52:40.520822 systemd[1]: session-4.scope: Deactivated successfully. Apr 21 03:52:40.566427 systemd-logind[1551]: Session 4 logged out. Waiting for processes to exit. Apr 21 03:52:40.586402 systemd[1]: Started sshd@4-10.0.0.123:22-10.0.0.1:39740.service - OpenSSH per-connection server daemon (10.0.0.1:39740). Apr 21 03:52:40.589943 systemd-logind[1551]: Removed session 4. Apr 21 03:52:40.738627 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 39740 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:52:40.740173 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:52:40.748458 systemd-logind[1551]: New session 5 of user core. Apr 21 03:52:40.766487 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 21 03:52:40.880978 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 21 03:52:40.883960 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 03:52:40.903578 sudo[1734]: pam_unix(sudo:session): session closed for user root Apr 21 03:52:40.911010 sshd[1733]: Connection closed by 10.0.0.1 port 39740 Apr 21 03:52:40.912445 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Apr 21 03:52:40.967857 systemd[1]: sshd@4-10.0.0.123:22-10.0.0.1:39740.service: Deactivated successfully. Apr 21 03:52:40.975043 systemd[1]: session-5.scope: Deactivated successfully. Apr 21 03:52:40.978985 systemd-logind[1551]: Session 5 logged out. Waiting for processes to exit. Apr 21 03:52:40.988061 systemd[1]: Started sshd@5-10.0.0.123:22-10.0.0.1:39752.service - OpenSSH per-connection server daemon (10.0.0.1:39752). Apr 21 03:52:40.988621 systemd-logind[1551]: Removed session 5. Apr 21 03:52:41.156631 sshd[1740]: Accepted publickey for core from 10.0.0.1 port 39752 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:52:41.163059 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:52:41.223646 systemd-logind[1551]: New session 6 of user core. Apr 21 03:52:41.246543 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 03:52:41.364752 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 03:52:41.365203 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 03:52:41.440869 sudo[1745]: pam_unix(sudo:session): session closed for user root Apr 21 03:52:41.455279 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 21 03:52:41.455567 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 03:52:41.580051 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 21 03:52:41.914112 augenrules[1767]: No rules Apr 21 03:52:41.925573 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 03:52:41.946707 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 21 03:52:41.952049 sudo[1744]: pam_unix(sudo:session): session closed for user root Apr 21 03:52:41.956958 sshd[1743]: Connection closed by 10.0.0.1 port 39752 Apr 21 03:52:41.958854 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Apr 21 03:52:42.019953 systemd[1]: sshd@5-10.0.0.123:22-10.0.0.1:39752.service: Deactivated successfully. Apr 21 03:52:42.062922 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 03:52:42.102898 systemd-logind[1551]: Session 6 logged out. Waiting for processes to exit. Apr 21 03:52:42.114938 systemd[1]: Started sshd@6-10.0.0.123:22-10.0.0.1:39756.service - OpenSSH per-connection server daemon (10.0.0.1:39756). Apr 21 03:52:42.118679 systemd-logind[1551]: Removed session 6. Apr 21 03:52:42.440679 sshd[1776]: Accepted publickey for core from 10.0.0.1 port 39756 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:52:42.447891 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:52:42.479270 systemd-logind[1551]: New session 7 of user core. 
Apr 21 03:52:42.492833 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 03:52:42.617087 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 03:52:42.617565 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 03:52:45.205897 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 03:52:45.279264 (dockerd)[1800]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 03:52:46.417288 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 21 03:52:46.603578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 03:52:47.116788 dockerd[1800]: time="2026-04-21T03:52:47.116421766Z" level=info msg="Starting up" Apr 21 03:52:47.121445 dockerd[1800]: time="2026-04-21T03:52:47.120017005Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Apr 21 03:52:47.176709 dockerd[1800]: time="2026-04-21T03:52:47.176520331Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Apr 21 03:52:47.190751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 03:52:47.229645 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 03:52:47.354297 dockerd[1800]: time="2026-04-21T03:52:47.352934092Z" level=info msg="Loading containers: start." 
Apr 21 03:52:47.454240 kernel: Initializing XFRM netlink socket Apr 21 03:52:47.461256 kubelet[1833]: E0421 03:52:47.461113 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 03:52:47.466086 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 03:52:47.466343 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 03:52:47.466826 systemd[1]: kubelet.service: Consumed 625ms CPU time, 111.1M memory peak. Apr 21 03:52:48.802592 systemd-networkd[1475]: docker0: Link UP Apr 21 03:52:48.836710 dockerd[1800]: time="2026-04-21T03:52:48.835705964Z" level=info msg="Loading containers: done." Apr 21 03:52:48.938932 dockerd[1800]: time="2026-04-21T03:52:48.938685704Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 21 03:52:48.939546 dockerd[1800]: time="2026-04-21T03:52:48.939519696Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Apr 21 03:52:48.939909 dockerd[1800]: time="2026-04-21T03:52:48.939837917Z" level=info msg="Initializing buildkit" Apr 21 03:52:49.018194 dockerd[1800]: time="2026-04-21T03:52:49.017737089Z" level=info msg="Completed buildkit initialization" Apr 21 03:52:49.114571 dockerd[1800]: time="2026-04-21T03:52:49.113509596Z" level=info msg="Daemon has completed initialization" Apr 21 03:52:49.115518 dockerd[1800]: time="2026-04-21T03:52:49.114549954Z" level=info msg="API listen on /run/docker.sock" Apr 21 03:52:49.116322 systemd[1]: Started docker.service - Docker Application Container Engine. 
Apr 21 03:52:51.255589 containerd[1583]: time="2026-04-21T03:52:51.254952169Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\"" Apr 21 03:52:53.052427 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2857445104.mount: Deactivated successfully. Apr 21 03:52:57.701606 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 21 03:52:57.720557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 03:52:59.099638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 03:52:59.209580 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 03:52:59.600880 kubelet[2103]: E0421 03:52:59.600328 2103 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 03:52:59.610091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 03:52:59.622913 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 03:52:59.625929 systemd[1]: kubelet.service: Consumed 1.149s CPU time, 108.7M memory peak. 
Apr 21 03:53:00.459704 containerd[1583]: time="2026-04-21T03:53:00.459393081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:00.459704 containerd[1583]: time="2026-04-21T03:53:00.459704188Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861" Apr 21 03:53:00.463124 containerd[1583]: time="2026-04-21T03:53:00.462693078Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:00.471191 containerd[1583]: time="2026-04-21T03:53:00.470696676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:00.477062 containerd[1583]: time="2026-04-21T03:53:00.476716199Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 9.221156863s" Apr 21 03:53:00.477062 containerd[1583]: time="2026-04-21T03:53:00.476931922Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\"" Apr 21 03:53:00.508679 containerd[1583]: time="2026-04-21T03:53:00.507895724Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\"" Apr 21 03:53:04.355466 containerd[1583]: time="2026-04-21T03:53:04.354744654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:04.359759 containerd[1583]: time="2026-04-21T03:53:04.357426637Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591" Apr 21 03:53:04.362840 containerd[1583]: time="2026-04-21T03:53:04.362638866Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:04.379484 containerd[1583]: time="2026-04-21T03:53:04.378945527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:04.382853 containerd[1583]: time="2026-04-21T03:53:04.382596803Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 3.873981273s" Apr 21 03:53:04.382853 containerd[1583]: time="2026-04-21T03:53:04.382792084Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\"" Apr 21 03:53:04.386548 containerd[1583]: time="2026-04-21T03:53:04.386208093Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\"" Apr 21 03:53:08.757427 containerd[1583]: time="2026-04-21T03:53:08.756746431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:08.762481 containerd[1583]: time="2026-04-21T03:53:08.761675769Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222" Apr 21 03:53:08.807394 containerd[1583]: time="2026-04-21T03:53:08.806860266Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:08.829323 containerd[1583]: time="2026-04-21T03:53:08.828884170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:08.834870 containerd[1583]: time="2026-04-21T03:53:08.834668779Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 4.44807229s" Apr 21 03:53:08.834870 containerd[1583]: time="2026-04-21T03:53:08.834806309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\"" Apr 21 03:53:08.839579 containerd[1583]: time="2026-04-21T03:53:08.838732746Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\"" Apr 21 03:53:09.666541 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 21 03:53:09.707585 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 03:53:11.230282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 03:53:11.326128 (kubelet)[2131]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 03:53:11.757327 kubelet[2131]: E0421 03:53:11.756691 2131 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 03:53:11.770925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 03:53:11.772472 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 03:53:11.787416 systemd[1]: kubelet.service: Consumed 1.244s CPU time, 110.8M memory peak. Apr 21 03:53:14.809987 update_engine[1556]: I20260421 03:53:14.808604 1556 update_attempter.cc:509] Updating boot flags... Apr 21 03:53:14.895836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1838744403.mount: Deactivated successfully. 
Apr 21 03:53:19.104090 containerd[1583]: time="2026-04-21T03:53:19.102768772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:19.111828 containerd[1583]: time="2026-04-21T03:53:19.105125400Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819" Apr 21 03:53:19.113505 containerd[1583]: time="2026-04-21T03:53:19.113256992Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:19.158960 containerd[1583]: time="2026-04-21T03:53:19.157881772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:19.179871 containerd[1583]: time="2026-04-21T03:53:19.173607719Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 10.334114997s" Apr 21 03:53:19.202855 containerd[1583]: time="2026-04-21T03:53:19.201707523Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\"" Apr 21 03:53:19.209798 containerd[1583]: time="2026-04-21T03:53:19.209543006Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\"" Apr 21 03:53:20.419805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3081869176.mount: Deactivated successfully. 
Apr 21 03:53:22.058990 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Apr 21 03:53:22.120076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 03:53:23.071836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 03:53:23.124504 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 03:53:23.577636 kubelet[2192]: E0421 03:53:23.577288 2192 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 03:53:23.585079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 03:53:23.587470 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 03:53:23.591127 systemd[1]: kubelet.service: Consumed 978ms CPU time, 110.7M memory peak. 
Apr 21 03:53:24.872980 containerd[1583]: time="2026-04-21T03:53:24.872459456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:24.874786 containerd[1583]: time="2026-04-21T03:53:24.873847387Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980" Apr 21 03:53:24.879096 containerd[1583]: time="2026-04-21T03:53:24.876983179Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:24.924544 containerd[1583]: time="2026-04-21T03:53:24.924314149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:24.929123 containerd[1583]: time="2026-04-21T03:53:24.928625219Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 5.718893816s" Apr 21 03:53:24.929123 containerd[1583]: time="2026-04-21T03:53:24.929042290Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\"" Apr 21 03:53:24.931868 containerd[1583]: time="2026-04-21T03:53:24.931690120Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Apr 21 03:53:25.599443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3703831847.mount: Deactivated successfully. 
Apr 21 03:53:25.610951 containerd[1583]: time="2026-04-21T03:53:25.610680194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:25.611340 containerd[1583]: time="2026-04-21T03:53:25.611291528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150" Apr 21 03:53:25.614056 containerd[1583]: time="2026-04-21T03:53:25.613779634Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:25.616138 containerd[1583]: time="2026-04-21T03:53:25.615963464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:25.617414 containerd[1583]: time="2026-04-21T03:53:25.617320897Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 685.417602ms" Apr 21 03:53:25.617528 containerd[1583]: time="2026-04-21T03:53:25.617422199Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Apr 21 03:53:25.618493 containerd[1583]: time="2026-04-21T03:53:25.618363896Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\"" Apr 21 03:53:26.383354 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2662551844.mount: Deactivated successfully. 
Apr 21 03:53:28.165311 containerd[1583]: time="2026-04-21T03:53:28.164915024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:28.190283 containerd[1583]: time="2026-04-21T03:53:28.167474415Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979" Apr 21 03:53:28.190873 containerd[1583]: time="2026-04-21T03:53:28.190742509Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:28.225276 containerd[1583]: time="2026-04-21T03:53:28.224932872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:53:28.229872 containerd[1583]: time="2026-04-21T03:53:28.229464161Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 2.610963967s" Apr 21 03:53:28.229872 containerd[1583]: time="2026-04-21T03:53:28.229812090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\"" Apr 21 03:53:29.824076 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 03:53:29.824401 systemd[1]: kubelet.service: Consumed 978ms CPU time, 110.7M memory peak. Apr 21 03:53:29.831237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 03:53:29.869376 systemd[1]: Reload requested from client PID 2330 ('systemctl') (unit session-7.scope)... 
Apr 21 03:53:29.869398 systemd[1]: Reloading... Apr 21 03:53:30.067306 zram_generator::config[2370]: No configuration found. Apr 21 03:53:30.514276 systemd[1]: Reloading finished in 644 ms. Apr 21 03:53:30.608928 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 21 03:53:30.609144 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 21 03:53:30.609744 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 03:53:30.609815 systemd[1]: kubelet.service: Consumed 175ms CPU time, 98.5M memory peak. Apr 21 03:53:30.614706 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 03:53:31.037552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 03:53:31.123411 (kubelet)[2421]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 03:53:31.198633 kubelet[2421]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 03:53:31.290544 kubelet[2421]: I0421 03:53:31.289828 2421 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 21 03:53:31.290544 kubelet[2421]: I0421 03:53:31.290001 2421 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 03:53:31.290544 kubelet[2421]: I0421 03:53:31.290108 2421 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 21 03:53:31.290544 kubelet[2421]: I0421 03:53:31.290114 2421 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 21 03:53:31.290544 kubelet[2421]: I0421 03:53:31.290488 2421 server.go:951] "Client rotation is on, will bootstrap in background" Apr 21 03:53:31.346632 kubelet[2421]: I0421 03:53:31.346230 2421 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 03:53:31.347303 kubelet[2421]: E0421 03:53:31.346918 2421 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.123:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.123:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 03:53:31.350853 kubelet[2421]: I0421 03:53:31.350813 2421 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Apr 21 03:53:31.357341 kubelet[2421]: I0421 03:53:31.357065 2421 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 21 03:53:31.360350 kubelet[2421]: I0421 03:53:31.360042 2421 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 03:53:31.360703 kubelet[2421]: I0421 03:53:31.360277 2421 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 03:53:31.360703 kubelet[2421]: I0421 03:53:31.360518 2421 topology_manager.go:143] "Creating topology manager with none policy" Apr 21 03:53:31.360703 
kubelet[2421]: I0421 03:53:31.360527 2421 container_manager_linux.go:308] "Creating device plugin manager" Apr 21 03:53:31.361063 kubelet[2421]: I0421 03:53:31.360907 2421 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 21 03:53:31.363985 kubelet[2421]: I0421 03:53:31.363852 2421 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 21 03:53:31.364427 kubelet[2421]: I0421 03:53:31.364410 2421 kubelet.go:482] "Attempting to sync node with API server" Apr 21 03:53:31.364460 kubelet[2421]: I0421 03:53:31.364430 2421 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 03:53:31.364477 kubelet[2421]: I0421 03:53:31.364466 2421 kubelet.go:394] "Adding apiserver pod source" Apr 21 03:53:31.364516 kubelet[2421]: I0421 03:53:31.364481 2421 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 03:53:31.369982 kubelet[2421]: I0421 03:53:31.368787 2421 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Apr 21 03:53:31.372852 kubelet[2421]: I0421 03:53:31.372731 2421 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 03:53:31.373294 kubelet[2421]: I0421 03:53:31.373248 2421 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 21 03:53:31.373422 kubelet[2421]: W0421 03:53:31.373404 2421 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 21 03:53:31.376248 kubelet[2421]: I0421 03:53:31.376209 2421 server.go:1257] "Started kubelet" Apr 21 03:53:31.376797 kubelet[2421]: I0421 03:53:31.376524 2421 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 03:53:31.376797 kubelet[2421]: I0421 03:53:31.376587 2421 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 03:53:31.376797 kubelet[2421]: I0421 03:53:31.376633 2421 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 21 03:53:31.376902 kubelet[2421]: I0421 03:53:31.376867 2421 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 03:53:31.377500 kubelet[2421]: I0421 03:53:31.377404 2421 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 21 03:53:31.378497 kubelet[2421]: I0421 03:53:31.378421 2421 server.go:317] "Adding debug handlers to kubelet server" Apr 21 03:53:31.379119 kubelet[2421]: I0421 03:53:31.379071 2421 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 03:53:31.380188 kubelet[2421]: E0421 03:53:31.380053 2421 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 21 03:53:31.380188 kubelet[2421]: I0421 03:53:31.380099 2421 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 21 03:53:31.380300 kubelet[2421]: I0421 03:53:31.380262 2421 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 21 03:53:31.381271 kubelet[2421]: I0421 03:53:31.380344 2421 reconciler.go:29] "Reconciler: start to sync state" Apr 21 03:53:31.381766 kubelet[2421]: I0421 03:53:31.381630 2421 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 
03:53:31.383303 kubelet[2421]: E0421 03:53:31.380820 2421 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.123:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.123:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a842e6a5059c1d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-21 03:53:31.376180253 +0000 UTC m=+0.246299093,LastTimestamp:2026-04-21 03:53:31.376180253 +0000 UTC m=+0.246299093,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 21 03:53:31.383303 kubelet[2421]: E0421 03:53:31.383268 2421 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="200ms" Apr 21 03:53:31.383781 kubelet[2421]: E0421 03:53:31.383360 2421 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 03:53:31.383781 kubelet[2421]: I0421 03:53:31.383412 2421 factory.go:223] Registration of the containerd container factory successfully Apr 21 03:53:31.383781 kubelet[2421]: I0421 03:53:31.383419 2421 factory.go:223] Registration of the systemd container factory successfully Apr 21 03:53:31.424320 kubelet[2421]: I0421 03:53:31.423879 2421 cpu_manager.go:225] "Starting" policy="none" Apr 21 03:53:31.424320 kubelet[2421]: I0421 03:53:31.423898 2421 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 21 03:53:31.424320 kubelet[2421]: I0421 03:53:31.423911 2421 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 21 03:53:31.428335 kubelet[2421]: I0421 03:53:31.427359 2421 policy_none.go:50] "Start" Apr 21 03:53:31.428335 kubelet[2421]: I0421 03:53:31.427616 2421 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 21 03:53:31.428335 kubelet[2421]: I0421 03:53:31.427663 2421 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 21 03:53:31.431836 kubelet[2421]: I0421 03:53:31.431337 2421 policy_none.go:44] "Start" Apr 21 03:53:31.436882 kubelet[2421]: I0421 03:53:31.436778 2421 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 21 03:53:31.439109 kubelet[2421]: I0421 03:53:31.439019 2421 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Apr 21 03:53:31.439109 kubelet[2421]: I0421 03:53:31.439094 2421 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 21 03:53:31.439331 kubelet[2421]: I0421 03:53:31.439135 2421 kubelet.go:2501] "Starting kubelet main sync loop" Apr 21 03:53:31.439736 kubelet[2421]: E0421 03:53:31.439267 2421 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 03:53:31.440242 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 21 03:53:31.457358 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 21 03:53:31.463836 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 21 03:53:31.476385 kubelet[2421]: E0421 03:53:31.476285 2421 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 03:53:31.476878 kubelet[2421]: I0421 03:53:31.476833 2421 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 21 03:53:31.476915 kubelet[2421]: I0421 03:53:31.476868 2421 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 03:53:31.477331 kubelet[2421]: I0421 03:53:31.477314 2421 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 21 03:53:31.479180 kubelet[2421]: E0421 03:53:31.479096 2421 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 21 03:53:31.479180 kubelet[2421]: E0421 03:53:31.479139 2421 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 21 03:53:31.570687 systemd[1]: Created slice kubepods-burstable-pod32500792b402c45096b7370a89e36ec8.slice - libcontainer container kubepods-burstable-pod32500792b402c45096b7370a89e36ec8.slice. Apr 21 03:53:31.581058 kubelet[2421]: E0421 03:53:31.580933 2421 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 03:53:31.581468 kubelet[2421]: I0421 03:53:31.581445 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 03:53:31.581496 kubelet[2421]: I0421 03:53:31.581478 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 03:53:31.581517 kubelet[2421]: I0421 03:53:31.581508 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 21 03:53:31.581517 kubelet[2421]: I0421 03:53:31.581522 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32500792b402c45096b7370a89e36ec8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32500792b402c45096b7370a89e36ec8\") " pod="kube-system/kube-apiserver-localhost" Apr 21 03:53:31.581517 kubelet[2421]: I0421 03:53:31.581543 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 03:53:31.581517 kubelet[2421]: I0421 03:53:31.581554 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 03:53:31.581517 kubelet[2421]: I0421 03:53:31.581566 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 21 03:53:31.581723 kubelet[2421]: I0421 03:53:31.581586 2421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32500792b402c45096b7370a89e36ec8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32500792b402c45096b7370a89e36ec8\") " pod="kube-system/kube-apiserver-localhost" Apr 21 03:53:31.581723 kubelet[2421]: I0421 03:53:31.581599 2421 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32500792b402c45096b7370a89e36ec8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32500792b402c45096b7370a89e36ec8\") " pod="kube-system/kube-apiserver-localhost" Apr 21 03:53:31.582107 kubelet[2421]: I0421 03:53:31.581966 2421 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 03:53:31.583675 kubelet[2421]: E0421 03:53:31.583501 2421 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Apr 21 03:53:31.584272 kubelet[2421]: E0421 03:53:31.584247 2421 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="400ms" Apr 21 03:53:31.586108 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. Apr 21 03:53:31.600301 kubelet[2421]: E0421 03:53:31.600248 2421 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 03:53:31.603466 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. 
Apr 21 03:53:31.607532 kubelet[2421]: E0421 03:53:31.607378 2421 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 21 03:53:31.787796 kubelet[2421]: I0421 03:53:31.787500 2421 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 03:53:31.788891 kubelet[2421]: E0421 03:53:31.788827 2421 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Apr 21 03:53:31.886268 kubelet[2421]: E0421 03:53:31.885671 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:53:31.889269 containerd[1583]: time="2026-04-21T03:53:31.888962498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32500792b402c45096b7370a89e36ec8,Namespace:kube-system,Attempt:0,}" Apr 21 03:53:31.904695 kubelet[2421]: E0421 03:53:31.904452 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:53:31.906271 containerd[1583]: time="2026-04-21T03:53:31.905963929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 21 03:53:31.911620 kubelet[2421]: E0421 03:53:31.911487 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:53:31.912972 containerd[1583]: time="2026-04-21T03:53:31.912932320Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 21 03:53:32.029321 kubelet[2421]: E0421 03:53:32.028982 2421 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="800ms" Apr 21 03:53:32.191635 kubelet[2421]: I0421 03:53:32.191523 2421 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 21 03:53:32.192362 kubelet[2421]: E0421 03:53:32.192326 2421 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost" Apr 21 03:53:32.399483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount45734660.mount: Deactivated successfully. Apr 21 03:53:32.411929 containerd[1583]: time="2026-04-21T03:53:32.411521549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 03:53:32.415460 containerd[1583]: time="2026-04-21T03:53:32.415240769Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 21 03:53:32.416923 containerd[1583]: time="2026-04-21T03:53:32.416569862Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 03:53:32.419031 containerd[1583]: time="2026-04-21T03:53:32.418826041Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 03:53:32.419987 containerd[1583]: 
time="2026-04-21T03:53:32.419921419Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 03:53:32.420608 containerd[1583]: time="2026-04-21T03:53:32.420577631Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 21 03:53:32.421348 containerd[1583]: time="2026-04-21T03:53:32.421323827Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Apr 21 03:53:32.422346 containerd[1583]: time="2026-04-21T03:53:32.422295345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 03:53:32.425460 containerd[1583]: time="2026-04-21T03:53:32.425294317Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 531.496695ms" Apr 21 03:53:32.426529 containerd[1583]: time="2026-04-21T03:53:32.426382251Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 517.00111ms" Apr 21 03:53:32.428547 containerd[1583]: time="2026-04-21T03:53:32.428297647Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 513.495762ms" Apr 21 03:53:32.482881 containerd[1583]: time="2026-04-21T03:53:32.481672773Z" level=info msg="connecting to shim 591863c7e789817586b596e8aca04e643ebdc2c9d5a4ca2f3f5922cb6a166aee" address="unix:///run/containerd/s/ce3e936ee690e14b46bacf9f0f12065510932cc055fa01b3ca852437d886f725" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:53:32.492651 containerd[1583]: time="2026-04-21T03:53:32.492383064Z" level=info msg="connecting to shim 7375c286ccb78ee0c4592431220f183f2886ecb859805a9034ae008d0957020f" address="unix:///run/containerd/s/708c48edf141efbeb14682331ad5c0995415c1931d027851ab9591a9e0366afc" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:53:32.492935 containerd[1583]: time="2026-04-21T03:53:32.492440444Z" level=info msg="connecting to shim aa5f998c782d017abcdc581a8a2dad0fc83be04f7f4b82e43764af09d938a16b" address="unix:///run/containerd/s/44a5ac5f44d4be64103838e42cfbf0f1896bc6b34c55fd4c4af7ace1a3baffa7" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:53:32.532706 systemd[1]: Started cri-containerd-591863c7e789817586b596e8aca04e643ebdc2c9d5a4ca2f3f5922cb6a166aee.scope - libcontainer container 591863c7e789817586b596e8aca04e643ebdc2c9d5a4ca2f3f5922cb6a166aee. Apr 21 03:53:32.559946 systemd[1]: Started cri-containerd-aa5f998c782d017abcdc581a8a2dad0fc83be04f7f4b82e43764af09d938a16b.scope - libcontainer container aa5f998c782d017abcdc581a8a2dad0fc83be04f7f4b82e43764af09d938a16b. Apr 21 03:53:32.565822 systemd[1]: Started cri-containerd-7375c286ccb78ee0c4592431220f183f2886ecb859805a9034ae008d0957020f.scope - libcontainer container 7375c286ccb78ee0c4592431220f183f2886ecb859805a9034ae008d0957020f. 
Apr 21 03:53:32.694227 containerd[1583]: time="2026-04-21T03:53:32.693737266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:32500792b402c45096b7370a89e36ec8,Namespace:kube-system,Attempt:0,} returns sandbox id \"591863c7e789817586b596e8aca04e643ebdc2c9d5a4ca2f3f5922cb6a166aee\""
Apr 21 03:53:32.699656 kubelet[2421]: E0421 03:53:32.699062 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:32.705431 containerd[1583]: time="2026-04-21T03:53:32.705189177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa5f998c782d017abcdc581a8a2dad0fc83be04f7f4b82e43764af09d938a16b\""
Apr 21 03:53:32.708890 containerd[1583]: time="2026-04-21T03:53:32.708698738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7375c286ccb78ee0c4592431220f183f2886ecb859805a9034ae008d0957020f\""
Apr 21 03:53:32.709368 kubelet[2421]: E0421 03:53:32.708743 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:32.709592 kubelet[2421]: E0421 03:53:32.709568 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:32.712543 containerd[1583]: time="2026-04-21T03:53:32.712490754Z" level=info msg="CreateContainer within sandbox \"591863c7e789817586b596e8aca04e643ebdc2c9d5a4ca2f3f5922cb6a166aee\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Apr 21 03:53:32.715743 containerd[1583]: time="2026-04-21T03:53:32.715591498Z" level=info msg="CreateContainer within sandbox \"aa5f998c782d017abcdc581a8a2dad0fc83be04f7f4b82e43764af09d938a16b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Apr 21 03:53:32.718068 containerd[1583]: time="2026-04-21T03:53:32.717998168Z" level=info msg="CreateContainer within sandbox \"7375c286ccb78ee0c4592431220f183f2886ecb859805a9034ae008d0957020f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Apr 21 03:53:32.724837 containerd[1583]: time="2026-04-21T03:53:32.724587615Z" level=info msg="Container 6632c789ea4ca9cc6da12d911545a71debf6fb8fb3f19ba116a705ef688a2295: CDI devices from CRI Config.CDIDevices: []"
Apr 21 03:53:32.736216 containerd[1583]: time="2026-04-21T03:53:32.735745448Z" level=info msg="Container dc985a6e18019be26a7eda3644af15ee898597ca5d9e762c798b3be912c8b7c3: CDI devices from CRI Config.CDIDevices: []"
Apr 21 03:53:32.739218 containerd[1583]: time="2026-04-21T03:53:32.738926938Z" level=info msg="Container 0e39e70b02d8b313c3180d6408da7082e5e7b9a3bb2b0ad8b629ae80fd9774be: CDI devices from CRI Config.CDIDevices: []"
Apr 21 03:53:32.744344 containerd[1583]: time="2026-04-21T03:53:32.744017811Z" level=info msg="CreateContainer within sandbox \"591863c7e789817586b596e8aca04e643ebdc2c9d5a4ca2f3f5922cb6a166aee\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6632c789ea4ca9cc6da12d911545a71debf6fb8fb3f19ba116a705ef688a2295\""
Apr 21 03:53:32.749257 containerd[1583]: time="2026-04-21T03:53:32.747374033Z" level=info msg="StartContainer for \"6632c789ea4ca9cc6da12d911545a71debf6fb8fb3f19ba116a705ef688a2295\""
Apr 21 03:53:32.753292 containerd[1583]: time="2026-04-21T03:53:32.752835133Z" level=info msg="connecting to shim 6632c789ea4ca9cc6da12d911545a71debf6fb8fb3f19ba116a705ef688a2295" address="unix:///run/containerd/s/ce3e936ee690e14b46bacf9f0f12065510932cc055fa01b3ca852437d886f725" protocol=ttrpc version=3
Apr 21 03:53:32.758263 containerd[1583]: time="2026-04-21T03:53:32.758199793Z" level=info msg="CreateContainer within sandbox \"7375c286ccb78ee0c4592431220f183f2886ecb859805a9034ae008d0957020f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0e39e70b02d8b313c3180d6408da7082e5e7b9a3bb2b0ad8b629ae80fd9774be\""
Apr 21 03:53:32.759297 containerd[1583]: time="2026-04-21T03:53:32.759264458Z" level=info msg="StartContainer for \"0e39e70b02d8b313c3180d6408da7082e5e7b9a3bb2b0ad8b629ae80fd9774be\""
Apr 21 03:53:32.760207 containerd[1583]: time="2026-04-21T03:53:32.760064883Z" level=info msg="CreateContainer within sandbox \"aa5f998c782d017abcdc581a8a2dad0fc83be04f7f4b82e43764af09d938a16b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc985a6e18019be26a7eda3644af15ee898597ca5d9e762c798b3be912c8b7c3\""
Apr 21 03:53:32.760595 containerd[1583]: time="2026-04-21T03:53:32.760567502Z" level=info msg="StartContainer for \"dc985a6e18019be26a7eda3644af15ee898597ca5d9e762c798b3be912c8b7c3\""
Apr 21 03:53:32.760930 containerd[1583]: time="2026-04-21T03:53:32.760777769Z" level=info msg="connecting to shim 0e39e70b02d8b313c3180d6408da7082e5e7b9a3bb2b0ad8b629ae80fd9774be" address="unix:///run/containerd/s/708c48edf141efbeb14682331ad5c0995415c1931d027851ab9591a9e0366afc" protocol=ttrpc version=3
Apr 21 03:53:32.761776 containerd[1583]: time="2026-04-21T03:53:32.761742181Z" level=info msg="connecting to shim dc985a6e18019be26a7eda3644af15ee898597ca5d9e762c798b3be912c8b7c3" address="unix:///run/containerd/s/44a5ac5f44d4be64103838e42cfbf0f1896bc6b34c55fd4c4af7ace1a3baffa7" protocol=ttrpc version=3
Apr 21 03:53:32.787253 systemd[1]: Started cri-containerd-6632c789ea4ca9cc6da12d911545a71debf6fb8fb3f19ba116a705ef688a2295.scope - libcontainer container 6632c789ea4ca9cc6da12d911545a71debf6fb8fb3f19ba116a705ef688a2295.
Apr 21 03:53:32.799389 systemd[1]: Started cri-containerd-dc985a6e18019be26a7eda3644af15ee898597ca5d9e762c798b3be912c8b7c3.scope - libcontainer container dc985a6e18019be26a7eda3644af15ee898597ca5d9e762c798b3be912c8b7c3.
Apr 21 03:53:32.819048 systemd[1]: Started cri-containerd-0e39e70b02d8b313c3180d6408da7082e5e7b9a3bb2b0ad8b629ae80fd9774be.scope - libcontainer container 0e39e70b02d8b313c3180d6408da7082e5e7b9a3bb2b0ad8b629ae80fd9774be.
Apr 21 03:53:32.836853 kubelet[2421]: E0421 03:53:32.835934 2421 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.123:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.123:6443: connect: connection refused" interval="1.6s"
Apr 21 03:53:32.965501 containerd[1583]: time="2026-04-21T03:53:32.965308715Z" level=info msg="StartContainer for \"0e39e70b02d8b313c3180d6408da7082e5e7b9a3bb2b0ad8b629ae80fd9774be\" returns successfully"
Apr 21 03:53:32.966890 containerd[1583]: time="2026-04-21T03:53:32.966865877Z" level=info msg="StartContainer for \"6632c789ea4ca9cc6da12d911545a71debf6fb8fb3f19ba116a705ef688a2295\" returns successfully"
Apr 21 03:53:32.978797 containerd[1583]: time="2026-04-21T03:53:32.978604882Z" level=info msg="StartContainer for \"dc985a6e18019be26a7eda3644af15ee898597ca5d9e762c798b3be912c8b7c3\" returns successfully"
Apr 21 03:53:33.002695 kubelet[2421]: I0421 03:53:33.001978 2421 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 03:53:33.005117 kubelet[2421]: E0421 03:53:33.004627 2421 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.123:6443/api/v1/nodes\": dial tcp 10.0.0.123:6443: connect: connection refused" node="localhost"
Apr 21 03:53:33.468004 kubelet[2421]: E0421 03:53:33.467917 2421 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 03:53:33.471273 kubelet[2421]: E0421 03:53:33.470473 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:33.480524 kubelet[2421]: E0421 03:53:33.478117 2421 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 03:53:33.480524 kubelet[2421]: E0421 03:53:33.526755 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:33.480524 kubelet[2421]: E0421 03:53:33.531304 2421 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 03:53:33.480524 kubelet[2421]: E0421 03:53:33.531732 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:34.464937 kubelet[2421]: E0421 03:53:34.463766 2421 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 21 03:53:34.485053 kubelet[2421]: E0421 03:53:34.484832 2421 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 03:53:34.486916 kubelet[2421]: E0421 03:53:34.485141 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:34.486916 kubelet[2421]: E0421 03:53:34.486648 2421 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 03:53:34.487038 kubelet[2421]: E0421 03:53:34.486939 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:34.572345 kubelet[2421]: E0421 03:53:34.572236 2421 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 21 03:53:34.572821 kubelet[2421]: E0421 03:53:34.572584 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:34.616455 kubelet[2421]: I0421 03:53:34.616269 2421 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 03:53:34.646892 kubelet[2421]: I0421 03:53:34.646702 2421 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 21 03:53:34.646892 kubelet[2421]: E0421 03:53:34.646856 2421 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Apr 21 03:53:34.680027 kubelet[2421]: E0421 03:53:34.679932 2421 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 03:53:34.782115 kubelet[2421]: E0421 03:53:34.781408 2421 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 03:53:34.887369 kubelet[2421]: E0421 03:53:34.886698 2421 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 03:53:34.988573 kubelet[2421]: E0421 03:53:34.988293 2421 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 03:53:35.089798 kubelet[2421]: E0421 03:53:35.088955 2421 kubelet_node_status.go:392] "Error getting the current
node from lister" err="node \"localhost\" not found"
Apr 21 03:53:35.240795 kubelet[2421]: E0421 03:53:35.240278 2421 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 03:53:35.364631 kubelet[2421]: E0421 03:53:35.345211 2421 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 03:53:35.457370 kubelet[2421]: E0421 03:53:35.456975 2421 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 03:53:35.560601 kubelet[2421]: E0421 03:53:35.560069 2421 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 03:53:35.723470 kubelet[2421]: E0421 03:53:35.680659 2421 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found"
Apr 21 03:53:35.790780 kubelet[2421]: I0421 03:53:35.783094 2421 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 03:53:35.900713 kubelet[2421]: I0421 03:53:35.900521 2421 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 03:53:35.930688 kubelet[2421]: I0421 03:53:35.929857 2421 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 03:53:36.369126 kubelet[2421]: I0421 03:53:36.368771 2421 apiserver.go:52] "Watching apiserver"
Apr 21 03:53:36.387521 kubelet[2421]: E0421 03:53:36.387341 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:36.388313 kubelet[2421]: E0421 03:53:36.387395 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:36.388313 kubelet[2421]: E0421 03:53:36.387403 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:36.509253 kubelet[2421]: I0421 03:53:36.508799 2421 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 21 03:53:38.130119 kubelet[2421]: E0421 03:53:38.129943 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:41.528415 kubelet[2421]: I0421 03:53:41.526976 2421 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=6.526826966 podStartE2EDuration="6.526826966s" podCreationTimestamp="2026-04-21 03:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 03:53:41.52454736 +0000 UTC m=+10.394666236" watchObservedRunningTime="2026-04-21 03:53:41.526826966 +0000 UTC m=+10.396945798"
Apr 21 03:53:41.766085 kubelet[2421]: I0421 03:53:41.765293 2421 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.765217856 podStartE2EDuration="6.765217856s" podCreationTimestamp="2026-04-21 03:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 03:53:41.640440641 +0000 UTC m=+10.510559533" watchObservedRunningTime="2026-04-21 03:53:41.765217856 +0000 UTC m=+10.635336696"
Apr 21 03:53:42.380048 kubelet[2421]: E0421 03:53:42.379774 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:42.413575 kubelet[2421]: I0421 03:53:42.413358 2421 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.413331081 podStartE2EDuration="7.413331081s" podCreationTimestamp="2026-04-21 03:53:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 03:53:41.768128052 +0000 UTC m=+10.638246908" watchObservedRunningTime="2026-04-21 03:53:42.413331081 +0000 UTC m=+11.283449914"
Apr 21 03:53:42.744520 kubelet[2421]: E0421 03:53:42.744464 2421 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:43.134588 systemd[1]: Reload requested from client PID 2715 ('systemctl') (unit session-7.scope)...
Apr 21 03:53:43.134989 systemd[1]: Reloading...
Apr 21 03:53:43.476251 zram_generator::config[2758]: No configuration found.
Apr 21 03:53:44.046044 systemd[1]: Reloading finished in 909 ms.
Apr 21 03:53:44.126103 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 03:53:44.148678 systemd[1]: kubelet.service: Deactivated successfully.
Apr 21 03:53:44.150947 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 03:53:44.151666 systemd[1]: kubelet.service: Consumed 3.742s CPU time, 126.4M memory peak.
Apr 21 03:53:44.161192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 03:53:44.735457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 03:53:44.764530 (kubelet)[2803]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 03:53:44.859268 kubelet[2803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag.
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 03:53:44.929762 kubelet[2803]: I0421 03:53:44.928830 2803 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 21 03:53:44.929762 kubelet[2803]: I0421 03:53:44.929615 2803 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 03:53:44.929762 kubelet[2803]: I0421 03:53:44.929892 2803 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 21 03:53:44.940663 kubelet[2803]: I0421 03:53:44.930065 2803 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 03:53:44.940663 kubelet[2803]: I0421 03:53:44.931703 2803 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 21 03:53:44.940663 kubelet[2803]: I0421 03:53:44.939393 2803 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 21 03:53:44.954498 kubelet[2803]: I0421 03:53:44.953449 2803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 03:53:44.971884 kubelet[2803]: I0421 03:53:44.971610 2803 server.go:1418] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Apr 21 03:53:45.007606 kubelet[2803]: I0421 03:53:45.006718 2803 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 21 03:53:45.009041 kubelet[2803]: I0421 03:53:45.007814 2803 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 03:53:45.009041 kubelet[2803]: I0421 03:53:45.007960 2803 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 03:53:45.009041 kubelet[2803]: I0421 03:53:45.008468 2803 topology_manager.go:143] "Creating topology manager with none policy"
Apr 21 03:53:45.009041 kubelet[2803]: I0421 03:53:45.008633 2803 container_manager_linux.go:308] "Creating device plugin manager"
Apr 21 03:53:45.009669 kubelet[2803]: I0421 03:53:45.008837 2803 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 21 03:53:45.009669 kubelet[2803]: I0421 03:53:45.009327 2803 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 21 03:53:45.010873 kubelet[2803]: I0421 03:53:45.010380 2803 kubelet.go:482] "Attempting to sync node with API server"
Apr 21 03:53:45.010873 kubelet[2803]: I0421 03:53:45.010877 2803 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 03:53:45.013535 kubelet[2803]: I0421 03:53:45.011711 2803 kubelet.go:394] "Adding apiserver pod source"
Apr 21 03:53:45.013535 kubelet[2803]: I0421 03:53:45.011878 2803 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 03:53:45.018571 kubelet[2803]: I0421 03:53:45.018346 2803 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Apr 21 03:53:45.028099 kubelet[2803]: I0421 03:53:45.027935 2803 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 03:53:45.028099 kubelet[2803]: I0421 03:53:45.028078 2803 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 21 03:53:45.059345 kubelet[2803]: I0421 03:53:45.058401 2803 server.go:1257] "Started kubelet"
Apr 21 03:53:45.060062 kubelet[2803]: I0421 03:53:45.059950 2803 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 03:53:45.060103 kubelet[2803]: I0421 03:53:45.060084 2803 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 21 03:53:45.067322 kubelet[2803]: I0421 03:53:45.066069 2803
server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 03:53:45.117025 kubelet[2803]: I0421 03:53:45.067538 2803 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 03:53:45.117025 kubelet[2803]: I0421 03:53:45.070699 2803 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 21 03:53:45.121840 kubelet[2803]: I0421 03:53:45.121672 2803 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 03:53:45.125782 kubelet[2803]: I0421 03:53:45.123710 2803 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 03:53:45.132379 kubelet[2803]: I0421 03:53:45.132141 2803 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 21 03:53:45.133501 kubelet[2803]: I0421 03:53:45.132870 2803 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 21 03:53:45.133501 kubelet[2803]: I0421 03:53:45.133118 2803 reconciler.go:29] "Reconciler: start to sync state"
Apr 21 03:53:45.133706 kubelet[2803]: I0421 03:53:45.133648 2803 factory.go:223] Registration of the containerd container factory successfully
Apr 21 03:53:45.133706 kubelet[2803]: I0421 03:53:45.133670 2803 factory.go:223] Registration of the systemd container factory successfully
Apr 21 03:53:45.133779 kubelet[2803]: I0421 03:53:45.133746 2803 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 03:53:45.204037 kubelet[2803]: I0421 03:53:45.203720 2803 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 21 03:53:45.214950 kubelet[2803]: I0421 03:53:45.214247 2803 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 21 03:53:45.214950 kubelet[2803]: I0421 03:53:45.214368 2803 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 21 03:53:45.214950 kubelet[2803]: I0421 03:53:45.214501 2803 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 21 03:53:45.214950 kubelet[2803]: E0421 03:53:45.214639 2803 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 03:53:45.327705 kubelet[2803]: E0421 03:53:45.326368 2803 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Apr 21 03:53:45.371837 kubelet[2803]: I0421 03:53:45.371480 2803 cpu_manager.go:225] "Starting" policy="none"
Apr 21 03:53:45.372331 kubelet[2803]: I0421 03:53:45.372059 2803 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 21 03:53:45.372331 kubelet[2803]: I0421 03:53:45.372130 2803 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 21 03:53:45.372521 kubelet[2803]: I0421 03:53:45.372416 2803 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet=""
Apr 21 03:53:45.372521 kubelet[2803]: I0421 03:53:45.372427 2803 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={}
Apr 21 03:53:45.372521 kubelet[2803]: I0421 03:53:45.372442 2803 policy_none.go:50] "Start"
Apr 21 03:53:45.372647 kubelet[2803]: I0421 03:53:45.372621 2803 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 21 03:53:45.372680 kubelet[2803]: I0421 03:53:45.372648 2803 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 21 03:53:45.372882 kubelet[2803]: I0421 03:53:45.372849 2803 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Apr 21 03:53:45.372882 kubelet[2803]: I0421 03:53:45.372867 2803 policy_none.go:44] "Start"
Apr 21 03:53:45.388876 kubelet[2803]: E0421 03:53:45.388209 2803 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 03:53:45.390087 kubelet[2803]: I0421 03:53:45.390050 2803 eviction_manager.go:194] "Eviction manager: starting control loop"
Apr 21 03:53:45.390244 kubelet[2803]: I0421 03:53:45.390104 2803 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 03:53:45.390815 kubelet[2803]: I0421 03:53:45.390780 2803 plugin_manager.go:121] "Starting Kubelet Plugin Manager"
Apr 21 03:53:45.397904 kubelet[2803]: E0421 03:53:45.397820 2803 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 03:53:45.531519 kubelet[2803]: I0421 03:53:45.531095 2803 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 21 03:53:45.532977 kubelet[2803]: I0421 03:53:45.532871 2803 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 21 03:53:45.533569 kubelet[2803]: I0421 03:53:45.533350 2803 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 21 03:53:45.552738 kubelet[2803]: I0421 03:53:45.552442 2803 kubelet_node_status.go:74] "Attempting to register node" node="localhost"
Apr 21 03:53:45.558965 kubelet[2803]: I0421 03:53:45.558606 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 03:53:45.558965
kubelet[2803]: I0421 03:53:45.558904 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost"
Apr 21 03:53:45.558965 kubelet[2803]: I0421 03:53:45.558982 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/32500792b402c45096b7370a89e36ec8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"32500792b402c45096b7370a89e36ec8\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 03:53:45.559586 kubelet[2803]: I0421 03:53:45.559031 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/32500792b402c45096b7370a89e36ec8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"32500792b402c45096b7370a89e36ec8\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 03:53:45.559586 kubelet[2803]: I0421 03:53:45.559061 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 03:53:45.559586 kubelet[2803]: I0421 03:53:45.559109 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 03:53:45.559586 kubelet[2803]: I0421 03:53:45.559131 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 03:53:45.559586 kubelet[2803]: I0421 03:53:45.559315 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost"
Apr 21 03:53:45.559674 kubelet[2803]: I0421 03:53:45.559358 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/32500792b402c45096b7370a89e36ec8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"32500792b402c45096b7370a89e36ec8\") " pod="kube-system/kube-apiserver-localhost"
Apr 21 03:53:45.567958 kubelet[2803]: E0421 03:53:45.567702 2803 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 21 03:53:45.568440 kubelet[2803]: E0421 03:53:45.568121 2803 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 21 03:53:45.568440 kubelet[2803]: E0421 03:53:45.568268 2803 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 21 03:53:45.575117 kubelet[2803]: I0421 03:53:45.575010 2803 kubelet_node_status.go:123] "Node was previously registered" node="localhost"
Apr 21 03:53:45.575566 kubelet[2803]: I0421 03:53:45.575222 2803 kubelet_node_status.go:77] "Successfully registered node" node="localhost"
Apr 21 03:53:45.872025 kubelet[2803]: E0421 03:53:45.871029 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:45.872025 kubelet[2803]: E0421 03:53:45.871771 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:45.872025 kubelet[2803]: E0421 03:53:45.871820 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:46.017933 kubelet[2803]: I0421 03:53:46.017489 2803 apiserver.go:52] "Watching apiserver"
Apr 21 03:53:46.134631 kubelet[2803]: I0421 03:53:46.133487 2803 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Apr 21 03:53:46.362427 kubelet[2803]: E0421 03:53:46.362273 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:46.362427 kubelet[2803]: E0421 03:53:46.362244 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:46.362427 kubelet[2803]: E0421 03:53:46.362387 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:47.366768 kubelet[2803]: E0421 03:53:47.366678 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr
21 03:53:47.366768 kubelet[2803]: E0421 03:53:47.366661 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:53:48.029353 kubelet[2803]: I0421 03:53:48.029124 2803 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 03:53:48.030435 containerd[1583]: time="2026-04-21T03:53:48.030368751Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 03:53:48.032193 kubelet[2803]: I0421 03:53:48.030778 2803 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 03:53:49.048258 systemd[1]: Created slice kubepods-besteffort-podbe76ad84_4aca_4eee_8738_91592faebc14.slice - libcontainer container kubepods-besteffort-podbe76ad84_4aca_4eee_8738_91592faebc14.slice. Apr 21 03:53:49.141459 kubelet[2803]: I0421 03:53:49.140650 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be76ad84-4aca-4eee-8738-91592faebc14-lib-modules\") pod \"kube-proxy-s4f9m\" (UID: \"be76ad84-4aca-4eee-8738-91592faebc14\") " pod="kube-system/kube-proxy-s4f9m" Apr 21 03:53:49.143585 kubelet[2803]: I0421 03:53:49.141319 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be76ad84-4aca-4eee-8738-91592faebc14-kube-proxy\") pod \"kube-proxy-s4f9m\" (UID: \"be76ad84-4aca-4eee-8738-91592faebc14\") " pod="kube-system/kube-proxy-s4f9m" Apr 21 03:53:49.143585 kubelet[2803]: I0421 03:53:49.142119 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be76ad84-4aca-4eee-8738-91592faebc14-xtables-lock\") pod \"kube-proxy-s4f9m\" (UID: 
\"be76ad84-4aca-4eee-8738-91592faebc14\") " pod="kube-system/kube-proxy-s4f9m" Apr 21 03:53:49.143585 kubelet[2803]: I0421 03:53:49.142612 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnnm7\" (UniqueName: \"kubernetes.io/projected/be76ad84-4aca-4eee-8738-91592faebc14-kube-api-access-fnnm7\") pod \"kube-proxy-s4f9m\" (UID: \"be76ad84-4aca-4eee-8738-91592faebc14\") " pod="kube-system/kube-proxy-s4f9m" Apr 21 03:53:49.364512 kubelet[2803]: E0421 03:53:49.364067 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:53:49.365808 systemd[1]: Created slice kubepods-besteffort-pod37728bfd_5eb9_4dde_9216_6343b39cb87a.slice - libcontainer container kubepods-besteffort-pod37728bfd_5eb9_4dde_9216_6343b39cb87a.slice. Apr 21 03:53:49.368401 containerd[1583]: time="2026-04-21T03:53:49.368133673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s4f9m,Uid:be76ad84-4aca-4eee-8738-91592faebc14,Namespace:kube-system,Attempt:0,}" Apr 21 03:53:49.440230 containerd[1583]: time="2026-04-21T03:53:49.440068652Z" level=info msg="connecting to shim 6f3ccdc151b2132bdd788b3d95a0e02a7a21765cf5ec32074803350f5685b419" address="unix:///run/containerd/s/04696cf88a589f541f14882a64f58a7d31530ad6149644265b86a2109ed4e84a" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:53:49.446083 kubelet[2803]: I0421 03:53:49.445831 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/37728bfd-5eb9-4dde-9216-6343b39cb87a-var-lib-calico\") pod \"tigera-operator-687949b757-4x89k\" (UID: \"37728bfd-5eb9-4dde-9216-6343b39cb87a\") " pod="tigera-operator/tigera-operator-687949b757-4x89k" Apr 21 03:53:49.446083 kubelet[2803]: I0421 03:53:49.446054 2803 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p7md\" (UniqueName: \"kubernetes.io/projected/37728bfd-5eb9-4dde-9216-6343b39cb87a-kube-api-access-9p7md\") pod \"tigera-operator-687949b757-4x89k\" (UID: \"37728bfd-5eb9-4dde-9216-6343b39cb87a\") " pod="tigera-operator/tigera-operator-687949b757-4x89k" Apr 21 03:53:49.483535 systemd[1]: Started cri-containerd-6f3ccdc151b2132bdd788b3d95a0e02a7a21765cf5ec32074803350f5685b419.scope - libcontainer container 6f3ccdc151b2132bdd788b3d95a0e02a7a21765cf5ec32074803350f5685b419. Apr 21 03:53:49.538266 containerd[1583]: time="2026-04-21T03:53:49.538056249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s4f9m,Uid:be76ad84-4aca-4eee-8738-91592faebc14,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f3ccdc151b2132bdd788b3d95a0e02a7a21765cf5ec32074803350f5685b419\"" Apr 21 03:53:49.541678 kubelet[2803]: E0421 03:53:49.541095 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:53:49.559583 containerd[1583]: time="2026-04-21T03:53:49.559390555Z" level=info msg="CreateContainer within sandbox \"6f3ccdc151b2132bdd788b3d95a0e02a7a21765cf5ec32074803350f5685b419\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 03:53:49.579217 containerd[1583]: time="2026-04-21T03:53:49.579114215Z" level=info msg="Container 86d70a2e2ef0103ed9f467d47032bb46908a10263470f9d5583ea5c013640877: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:53:49.591374 containerd[1583]: time="2026-04-21T03:53:49.591247967Z" level=info msg="CreateContainer within sandbox \"6f3ccdc151b2132bdd788b3d95a0e02a7a21765cf5ec32074803350f5685b419\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"86d70a2e2ef0103ed9f467d47032bb46908a10263470f9d5583ea5c013640877\"" Apr 21 03:53:49.595215 containerd[1583]: 
time="2026-04-21T03:53:49.594122895Z" level=info msg="StartContainer for \"86d70a2e2ef0103ed9f467d47032bb46908a10263470f9d5583ea5c013640877\"" Apr 21 03:53:49.596766 containerd[1583]: time="2026-04-21T03:53:49.596700118Z" level=info msg="connecting to shim 86d70a2e2ef0103ed9f467d47032bb46908a10263470f9d5583ea5c013640877" address="unix:///run/containerd/s/04696cf88a589f541f14882a64f58a7d31530ad6149644265b86a2109ed4e84a" protocol=ttrpc version=3 Apr 21 03:53:49.624394 systemd[1]: Started cri-containerd-86d70a2e2ef0103ed9f467d47032bb46908a10263470f9d5583ea5c013640877.scope - libcontainer container 86d70a2e2ef0103ed9f467d47032bb46908a10263470f9d5583ea5c013640877. Apr 21 03:53:49.674542 containerd[1583]: time="2026-04-21T03:53:49.674411319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-687949b757-4x89k,Uid:37728bfd-5eb9-4dde-9216-6343b39cb87a,Namespace:tigera-operator,Attempt:0,}" Apr 21 03:53:49.711470 containerd[1583]: time="2026-04-21T03:53:49.711345311Z" level=info msg="connecting to shim ab89c0f06c4850d192b6bf3b5d363df3c4a806fd594e8bee9c0f358129434e34" address="unix:///run/containerd/s/98ea533df9ae53159841a7b1252230add14dfc75c3b61f5517c3ca9e0c5cdd96" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:53:49.717912 containerd[1583]: time="2026-04-21T03:53:49.716630039Z" level=info msg="StartContainer for \"86d70a2e2ef0103ed9f467d47032bb46908a10263470f9d5583ea5c013640877\" returns successfully" Apr 21 03:53:49.755052 systemd[1]: Started cri-containerd-ab89c0f06c4850d192b6bf3b5d363df3c4a806fd594e8bee9c0f358129434e34.scope - libcontainer container ab89c0f06c4850d192b6bf3b5d363df3c4a806fd594e8bee9c0f358129434e34. 
Apr 21 03:53:49.894715 containerd[1583]: time="2026-04-21T03:53:49.894087054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-687949b757-4x89k,Uid:37728bfd-5eb9-4dde-9216-6343b39cb87a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ab89c0f06c4850d192b6bf3b5d363df3c4a806fd594e8bee9c0f358129434e34\""
Apr 21 03:53:49.900673 containerd[1583]: time="2026-04-21T03:53:49.900604105Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\""
Apr 21 03:53:50.425543 kubelet[2803]: E0421 03:53:50.425435 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:53:50.456529 kubelet[2803]: I0421 03:53:50.455725 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-s4f9m" podStartSLOduration=2.455318803 podStartE2EDuration="2.455318803s" podCreationTimestamp="2026-04-21 03:53:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 03:53:50.455232016 +0000 UTC m=+5.683552508" watchObservedRunningTime="2026-04-21 03:53:50.455318803 +0000 UTC m=+5.683639310"
Apr 21 03:53:51.329909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3358121538.mount: Deactivated successfully.
Apr 21 03:53:53.204472 containerd[1583]: time="2026-04-21T03:53:53.204287552Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:53:53.206900 containerd[1583]: time="2026-04-21T03:53:53.205495063Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.8: active requests=0, bytes read=41007543"
Apr 21 03:53:53.211921 containerd[1583]: time="2026-04-21T03:53:53.210276405Z" level=info msg="ImageCreate event name:\"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:53:53.221466 containerd[1583]: time="2026-04-21T03:53:53.221081753Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:53:53.225260 containerd[1583]: time="2026-04-21T03:53:53.224812916Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.8\" with image id \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\", repo tag \"quay.io/tigera/operator:v1.40.8\", repo digest \"quay.io/tigera/operator@sha256:ce8eeaa3e60794610f3851ee06d296575f7c2efef1e3e1f8ac751a1d87ab979c\", size \"41003538\" in 3.324130219s"
Apr 21 03:53:53.225260 containerd[1583]: time="2026-04-21T03:53:53.225248579Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.8\" returns image reference \"sha256:31fe9f73b19b5c10bcbd8f050af2f52293dfee5571cebbb6e816bf013505b9cb\""
Apr 21 03:53:53.266955 containerd[1583]: time="2026-04-21T03:53:53.266590701Z" level=info msg="CreateContainer within sandbox \"ab89c0f06c4850d192b6bf3b5d363df3c4a806fd594e8bee9c0f358129434e34\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 21 03:53:53.331753 containerd[1583]: time="2026-04-21T03:53:53.331566141Z" level=info msg="Container e0ea0f6283f1ff4b8a696cd91b088f03b9c4790db5393d5599beea3a4c67d22f: CDI devices from CRI Config.CDIDevices: []"
Apr 21 03:53:53.374249 containerd[1583]: time="2026-04-21T03:53:53.373951620Z" level=info msg="CreateContainer within sandbox \"ab89c0f06c4850d192b6bf3b5d363df3c4a806fd594e8bee9c0f358129434e34\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e0ea0f6283f1ff4b8a696cd91b088f03b9c4790db5393d5599beea3a4c67d22f\""
Apr 21 03:53:53.377523 containerd[1583]: time="2026-04-21T03:53:53.377340668Z" level=info msg="StartContainer for \"e0ea0f6283f1ff4b8a696cd91b088f03b9c4790db5393d5599beea3a4c67d22f\""
Apr 21 03:53:53.379479 containerd[1583]: time="2026-04-21T03:53:53.379413090Z" level=info msg="connecting to shim e0ea0f6283f1ff4b8a696cd91b088f03b9c4790db5393d5599beea3a4c67d22f" address="unix:///run/containerd/s/98ea533df9ae53159841a7b1252230add14dfc75c3b61f5517c3ca9e0c5cdd96" protocol=ttrpc version=3
Apr 21 03:53:53.460934 systemd[1]: Started cri-containerd-e0ea0f6283f1ff4b8a696cd91b088f03b9c4790db5393d5599beea3a4c67d22f.scope - libcontainer container e0ea0f6283f1ff4b8a696cd91b088f03b9c4790db5393d5599beea3a4c67d22f.
Apr 21 03:53:53.585002 containerd[1583]: time="2026-04-21T03:53:53.584748922Z" level=info msg="StartContainer for \"e0ea0f6283f1ff4b8a696cd91b088f03b9c4790db5393d5599beea3a4c67d22f\" returns successfully"
Apr 21 03:54:01.914799 sudo[1780]: pam_unix(sudo:session): session closed for user root
Apr 21 03:54:01.924357 sshd[1779]: Connection closed by 10.0.0.1 port 39756
Apr 21 03:54:01.925298 sshd-session[1776]: pam_unix(sshd:session): session closed for user core
Apr 21 03:54:01.961173 systemd[1]: sshd@6-10.0.0.123:22-10.0.0.1:39756.service: Deactivated successfully.
Apr 21 03:54:02.033119 systemd[1]: session-7.scope: Deactivated successfully.
Apr 21 03:54:02.033819 systemd[1]: session-7.scope: Consumed 8.675s CPU time, 227.4M memory peak.
Apr 21 03:54:02.046116 systemd-logind[1551]: Session 7 logged out. Waiting for processes to exit.
Apr 21 03:54:02.051536 systemd-logind[1551]: Removed session 7.
Apr 21 03:54:10.545996 kubelet[2803]: I0421 03:54:10.545038 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-687949b757-4x89k" podStartSLOduration=18.215646783 podStartE2EDuration="21.54493626s" podCreationTimestamp="2026-04-21 03:53:49 +0000 UTC" firstStartedPulling="2026-04-21 03:53:49.898396935 +0000 UTC m=+5.126717433" lastFinishedPulling="2026-04-21 03:53:53.227686414 +0000 UTC m=+8.456006910" observedRunningTime="2026-04-21 03:53:54.544356078 +0000 UTC m=+9.772676598" watchObservedRunningTime="2026-04-21 03:54:10.54493626 +0000 UTC m=+25.773256747"
Apr 21 03:54:10.618922 systemd[1]: Created slice kubepods-besteffort-poda2538c18_3e79_4788_88fa_92abf6e1cc46.slice - libcontainer container kubepods-besteffort-poda2538c18_3e79_4788_88fa_92abf6e1cc46.slice.
Apr 21 03:54:10.731565 kubelet[2803]: I0421 03:54:10.730430 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2538c18-3e79-4788-88fa-92abf6e1cc46-tigera-ca-bundle\") pod \"calico-typha-574f8b6f89-4n4cn\" (UID: \"a2538c18-3e79-4788-88fa-92abf6e1cc46\") " pod="calico-system/calico-typha-574f8b6f89-4n4cn"
Apr 21 03:54:10.735498 kubelet[2803]: I0421 03:54:10.734467 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a2538c18-3e79-4788-88fa-92abf6e1cc46-typha-certs\") pod \"calico-typha-574f8b6f89-4n4cn\" (UID: \"a2538c18-3e79-4788-88fa-92abf6e1cc46\") " pod="calico-system/calico-typha-574f8b6f89-4n4cn"
Apr 21 03:54:10.735498 kubelet[2803]: I0421 03:54:10.734738 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nztkf\" (UniqueName: \"kubernetes.io/projected/a2538c18-3e79-4788-88fa-92abf6e1cc46-kube-api-access-nztkf\") pod \"calico-typha-574f8b6f89-4n4cn\" (UID: \"a2538c18-3e79-4788-88fa-92abf6e1cc46\") " pod="calico-system/calico-typha-574f8b6f89-4n4cn"
Apr 21 03:54:11.151076 kubelet[2803]: I0421 03:54:11.150699 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9e52332e-8778-41d8-8edc-208df6eb07c7-node-certs\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.160943 kubelet[2803]: I0421 03:54:11.160718 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-var-lib-calico\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.167054 kubelet[2803]: I0421 03:54:11.161062 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-lib-modules\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.167054 kubelet[2803]: I0421 03:54:11.161085 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e52332e-8778-41d8-8edc-208df6eb07c7-tigera-ca-bundle\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.167054 kubelet[2803]: I0421 03:54:11.161101 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-cni-bin-dir\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.167054 kubelet[2803]: I0421 03:54:11.161129 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-cni-net-dir\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.167054 kubelet[2803]: I0421 03:54:11.161140 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-nodeproc\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.170055 kubelet[2803]: I0421 03:54:11.161195 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-flexvol-driver-host\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.170055 kubelet[2803]: I0421 03:54:11.161209 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-sys-fs\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.170055 kubelet[2803]: I0421 03:54:11.161223 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-xtables-lock\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.170055 kubelet[2803]: I0421 03:54:11.161234 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-policysync\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.170055 kubelet[2803]: I0421 03:54:11.161249 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-var-run-calico\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.251409 kubelet[2803]: I0421 03:54:11.161269 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78nxl\" (UniqueName: \"kubernetes.io/projected/9e52332e-8778-41d8-8edc-208df6eb07c7-kube-api-access-78nxl\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.251409 kubelet[2803]: I0421 03:54:11.161290 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-bpffs\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.251409 kubelet[2803]: I0421 03:54:11.161303 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9e52332e-8778-41d8-8edc-208df6eb07c7-cni-log-dir\") pod \"calico-node-rlhhx\" (UID: \"9e52332e-8778-41d8-8edc-208df6eb07c7\") " pod="calico-system/calico-node-rlhhx"
Apr 21 03:54:11.231675 systemd[1]: Created slice kubepods-besteffort-pod9e52332e_8778_41d8_8edc_208df6eb07c7.slice - libcontainer container kubepods-besteffort-pod9e52332e_8778_41d8_8edc_208df6eb07c7.slice.
Apr 21 03:54:11.322485 kubelet[2803]: E0421 03:54:11.322278 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.322485 kubelet[2803]: W0421 03:54:11.322433 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.323757 kubelet[2803]: E0421 03:54:11.322563 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.324295 kubelet[2803]: E0421 03:54:11.324244 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.324295 kubelet[2803]: W0421 03:54:11.324287 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.324385 kubelet[2803]: E0421 03:54:11.324343 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.324862 kubelet[2803]: E0421 03:54:11.324829 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.324862 kubelet[2803]: W0421 03:54:11.324847 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.324862 kubelet[2803]: E0421 03:54:11.324857 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.325945 kubelet[2803]: E0421 03:54:11.325604 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.325945 kubelet[2803]: W0421 03:54:11.325968 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.326563 kubelet[2803]: E0421 03:54:11.326068 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.326871 kubelet[2803]: E0421 03:54:11.326839 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.326871 kubelet[2803]: W0421 03:54:11.326860 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.326871 kubelet[2803]: E0421 03:54:11.326871 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.327225 kubelet[2803]: E0421 03:54:11.327202 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.327225 kubelet[2803]: W0421 03:54:11.327219 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.327225 kubelet[2803]: E0421 03:54:11.327226 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.327446 kubelet[2803]: E0421 03:54:11.327419 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.327446 kubelet[2803]: W0421 03:54:11.327434 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.327446 kubelet[2803]: E0421 03:54:11.327440 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.329214 kubelet[2803]: E0421 03:54:11.328785 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.329214 kubelet[2803]: W0421 03:54:11.329235 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.329718 kubelet[2803]: E0421 03:54:11.329365 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.330729 kubelet[2803]: E0421 03:54:11.330566 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.330729 kubelet[2803]: W0421 03:54:11.330676 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.330729 kubelet[2803]: E0421 03:54:11.330750 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.331375 kubelet[2803]: E0421 03:54:11.331346 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.331375 kubelet[2803]: W0421 03:54:11.331364 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.331375 kubelet[2803]: E0421 03:54:11.331373 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.332834 kubelet[2803]: E0421 03:54:11.332527 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.333643 kubelet[2803]: W0421 03:54:11.332871 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.333643 kubelet[2803]: E0421 03:54:11.332966 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.334022 kubelet[2803]: E0421 03:54:11.333919 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.334047 kubelet[2803]: W0421 03:54:11.334025 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.334208 kubelet[2803]: E0421 03:54:11.334177 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.334760 kubelet[2803]: E0421 03:54:11.334508 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.334760 kubelet[2803]: W0421 03:54:11.334736 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.335840 kubelet[2803]: E0421 03:54:11.334891 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.344073 kubelet[2803]: E0421 03:54:11.342894 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.344073 kubelet[2803]: W0421 03:54:11.343652 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.353581 kubelet[2803]: E0421 03:54:11.351241 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.363418 kubelet[2803]: E0421 03:54:11.362895 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.364682 kubelet[2803]: W0421 03:54:11.363367 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.364682 kubelet[2803]: E0421 03:54:11.363551 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.368528 kubelet[2803]: E0421 03:54:11.368336 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.368528 kubelet[2803]: W0421 03:54:11.368471 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.370007 kubelet[2803]: E0421 03:54:11.368712 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.376283 kubelet[2803]: E0421 03:54:11.375890 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.378996 kubelet[2803]: W0421 03:54:11.378888 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.379656 kubelet[2803]: E0421 03:54:11.379038 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 21 03:54:11.405799 kubelet[2803]: E0421 03:54:11.402737 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 21 03:54:11.405799 kubelet[2803]: W0421 03:54:11.402843 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 21 03:54:11.405799 kubelet[2803]: E0421 03:54:11.402912 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.405799 kubelet[2803]: E0421 03:54:11.403759 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.405799 kubelet[2803]: W0421 03:54:11.403868 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.405799 kubelet[2803]: E0421 03:54:11.404011 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.405799 kubelet[2803]: E0421 03:54:11.404681 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.405799 kubelet[2803]: W0421 03:54:11.404693 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.405799 kubelet[2803]: E0421 03:54:11.404708 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.405799 kubelet[2803]: E0421 03:54:11.405074 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.406510 kubelet[2803]: W0421 03:54:11.405096 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.406510 kubelet[2803]: E0421 03:54:11.405109 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.406510 kubelet[2803]: E0421 03:54:11.405321 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.406510 kubelet[2803]: W0421 03:54:11.405329 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.406510 kubelet[2803]: E0421 03:54:11.405339 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.406510 kubelet[2803]: E0421 03:54:11.405560 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.406510 kubelet[2803]: W0421 03:54:11.405568 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.406510 kubelet[2803]: E0421 03:54:11.405578 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.406742 kubelet[2803]: E0421 03:54:11.406619 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.406742 kubelet[2803]: W0421 03:54:11.406634 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.406742 kubelet[2803]: E0421 03:54:11.406650 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.409676 kubelet[2803]: E0421 03:54:11.409376 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.410472 kubelet[2803]: W0421 03:54:11.409735 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.410472 kubelet[2803]: E0421 03:54:11.409826 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.420321 kubelet[2803]: E0421 03:54:11.420262 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.420321 kubelet[2803]: W0421 03:54:11.420303 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.421194 kubelet[2803]: E0421 03:54:11.420544 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.421779 kubelet[2803]: E0421 03:54:11.421574 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.421779 kubelet[2803]: W0421 03:54:11.421588 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.421779 kubelet[2803]: E0421 03:54:11.421606 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.427099 kubelet[2803]: E0421 03:54:11.423729 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.427099 kubelet[2803]: W0421 03:54:11.423847 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.427099 kubelet[2803]: E0421 03:54:11.423905 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.427099 kubelet[2803]: E0421 03:54:11.426630 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.427099 kubelet[2803]: W0421 03:54:11.426729 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.427099 kubelet[2803]: E0421 03:54:11.426871 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.430346 kubelet[2803]: E0421 03:54:11.429651 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.430346 kubelet[2803]: W0421 03:54:11.429754 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.430346 kubelet[2803]: E0421 03:54:11.429853 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.430663 kubelet[2803]: E0421 03:54:11.430573 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.430663 kubelet[2803]: W0421 03:54:11.430581 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.430663 kubelet[2803]: E0421 03:54:11.430591 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.436135 kubelet[2803]: E0421 03:54:11.433926 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.436135 kubelet[2803]: W0421 03:54:11.434348 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.436135 kubelet[2803]: E0421 03:54:11.434493 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.436135 kubelet[2803]: E0421 03:54:11.435740 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.436135 kubelet[2803]: W0421 03:54:11.435882 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.436135 kubelet[2803]: E0421 03:54:11.435951 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.439581 kubelet[2803]: E0421 03:54:11.438484 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.439581 kubelet[2803]: W0421 03:54:11.438704 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.439581 kubelet[2803]: E0421 03:54:11.438967 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.442566 kubelet[2803]: E0421 03:54:11.441901 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.442566 kubelet[2803]: W0421 03:54:11.442395 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.442566 kubelet[2803]: E0421 03:54:11.442567 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.452042 kubelet[2803]: E0421 03:54:11.451503 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.453052 kubelet[2803]: W0421 03:54:11.452119 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.453052 kubelet[2803]: E0421 03:54:11.452665 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.459909 kubelet[2803]: E0421 03:54:11.459693 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.459909 kubelet[2803]: W0421 03:54:11.459845 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.463942 kubelet[2803]: E0421 03:54:11.459975 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.463942 kubelet[2803]: E0421 03:54:11.462534 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.463942 kubelet[2803]: W0421 03:54:11.462605 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.463942 kubelet[2803]: E0421 03:54:11.462707 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.466948 kubelet[2803]: E0421 03:54:11.464583 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.466948 kubelet[2803]: W0421 03:54:11.464721 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.466948 kubelet[2803]: E0421 03:54:11.464807 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.466948 kubelet[2803]: E0421 03:54:11.465681 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.466948 kubelet[2803]: W0421 03:54:11.465693 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.466948 kubelet[2803]: E0421 03:54:11.465706 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.466948 kubelet[2803]: E0421 03:54:11.466750 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.466948 kubelet[2803]: W0421 03:54:11.466838 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.466948 kubelet[2803]: E0421 03:54:11.466927 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.471796 kubelet[2803]: E0421 03:54:11.469976 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.471796 kubelet[2803]: W0421 03:54:11.470356 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.471796 kubelet[2803]: E0421 03:54:11.470420 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.471796 kubelet[2803]: E0421 03:54:11.470978 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.471796 kubelet[2803]: W0421 03:54:11.471000 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.471796 kubelet[2803]: E0421 03:54:11.471010 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.471796 kubelet[2803]: E0421 03:54:11.471230 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.471796 kubelet[2803]: W0421 03:54:11.471253 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.471796 kubelet[2803]: E0421 03:54:11.471326 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.522927 kubelet[2803]: E0421 03:54:11.507646 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.522927 kubelet[2803]: W0421 03:54:11.507926 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.522927 kubelet[2803]: E0421 03:54:11.508589 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.522927 kubelet[2803]: E0421 03:54:11.511647 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.522927 kubelet[2803]: W0421 03:54:11.511751 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.522927 kubelet[2803]: E0421 03:54:11.511822 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.522927 kubelet[2803]: E0421 03:54:11.515504 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.522927 kubelet[2803]: W0421 03:54:11.515696 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.522927 kubelet[2803]: E0421 03:54:11.515833 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.522927 kubelet[2803]: E0421 03:54:11.519544 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.524404 kubelet[2803]: W0421 03:54:11.519658 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.524404 kubelet[2803]: E0421 03:54:11.519761 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.524404 kubelet[2803]: E0421 03:54:11.523977 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.524404 kubelet[2803]: W0421 03:54:11.524193 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.524404 kubelet[2803]: E0421 03:54:11.524263 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.532074 kubelet[2803]: E0421 03:54:11.529387 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.532074 kubelet[2803]: W0421 03:54:11.529530 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.532074 kubelet[2803]: E0421 03:54:11.529612 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.535572 kubelet[2803]: E0421 03:54:11.533859 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.543134 kubelet[2803]: W0421 03:54:11.537091 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.543134 kubelet[2803]: E0421 03:54:11.539652 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.546606 kubelet[2803]: E0421 03:54:11.546246 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.546606 kubelet[2803]: W0421 03:54:11.546364 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.546606 kubelet[2803]: E0421 03:54:11.546512 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.549444 kubelet[2803]: E0421 03:54:11.547437 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.549444 kubelet[2803]: W0421 03:54:11.547456 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.549444 kubelet[2803]: E0421 03:54:11.547472 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.549444 kubelet[2803]: E0421 03:54:11.547827 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.549444 kubelet[2803]: W0421 03:54:11.547836 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.549444 kubelet[2803]: E0421 03:54:11.547847 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.549444 kubelet[2803]: E0421 03:54:11.549036 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.549444 kubelet[2803]: W0421 03:54:11.549125 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.549444 kubelet[2803]: E0421 03:54:11.549303 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.562887 kubelet[2803]: E0421 03:54:11.549851 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.562887 kubelet[2803]: W0421 03:54:11.549861 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.562887 kubelet[2803]: E0421 03:54:11.549923 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.562887 kubelet[2803]: E0421 03:54:11.556608 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.562887 kubelet[2803]: W0421 03:54:11.556854 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.562887 kubelet[2803]: E0421 03:54:11.557013 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.563973 kubelet[2803]: E0421 03:54:11.563922 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.564016 kubelet[2803]: W0421 03:54:11.563955 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.564048 kubelet[2803]: E0421 03:54:11.564031 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.566353 kubelet[2803]: E0421 03:54:11.557852 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2" Apr 21 03:54:11.577752 kubelet[2803]: E0421 03:54:11.577640 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.577752 kubelet[2803]: W0421 03:54:11.577734 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.630486 kubelet[2803]: E0421 03:54:11.577922 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.630486 kubelet[2803]: E0421 03:54:11.578126 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.630486 kubelet[2803]: W0421 03:54:11.578135 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.630486 kubelet[2803]: E0421 03:54:11.578175 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.633546 kubelet[2803]: E0421 03:54:11.632508 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:54:11.633546 kubelet[2803]: E0421 03:54:11.633387 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.633546 kubelet[2803]: W0421 03:54:11.633431 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.633546 kubelet[2803]: E0421 03:54:11.633518 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.635755 containerd[1583]: time="2026-04-21T03:54:11.635652791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574f8b6f89-4n4cn,Uid:a2538c18-3e79-4788-88fa-92abf6e1cc46,Namespace:calico-system,Attempt:0,}" Apr 21 03:54:11.649405 kubelet[2803]: E0421 03:54:11.648397 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.649405 kubelet[2803]: W0421 03:54:11.649397 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.651177 kubelet[2803]: E0421 03:54:11.650401 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.661610 kubelet[2803]: E0421 03:54:11.658384 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.661610 kubelet[2803]: W0421 03:54:11.658440 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.661610 kubelet[2803]: E0421 03:54:11.658550 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.666880 kubelet[2803]: E0421 03:54:11.665799 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.666880 kubelet[2803]: W0421 03:54:11.666440 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.666880 kubelet[2803]: E0421 03:54:11.666877 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.669911 kubelet[2803]: E0421 03:54:11.669735 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.669911 kubelet[2803]: W0421 03:54:11.669839 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.669911 kubelet[2803]: E0421 03:54:11.669859 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.673832 kubelet[2803]: E0421 03:54:11.673608 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.673832 kubelet[2803]: W0421 03:54:11.673783 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.674780 kubelet[2803]: E0421 03:54:11.673880 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.700836 kubelet[2803]: E0421 03:54:11.700637 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.700836 kubelet[2803]: W0421 03:54:11.700783 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.701599 kubelet[2803]: E0421 03:54:11.701041 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.701952 kubelet[2803]: E0421 03:54:11.701912 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.701952 kubelet[2803]: W0421 03:54:11.701938 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.702051 kubelet[2803]: E0421 03:54:11.701956 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.705290 kubelet[2803]: E0421 03:54:11.702386 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.705290 kubelet[2803]: W0421 03:54:11.702401 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.705290 kubelet[2803]: E0421 03:54:11.702414 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.705290 kubelet[2803]: E0421 03:54:11.702625 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.705290 kubelet[2803]: W0421 03:54:11.702635 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.705290 kubelet[2803]: E0421 03:54:11.702646 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.705290 kubelet[2803]: E0421 03:54:11.703027 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.705290 kubelet[2803]: W0421 03:54:11.703035 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.705290 kubelet[2803]: E0421 03:54:11.703045 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.705290 kubelet[2803]: E0421 03:54:11.704434 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.706029 kubelet[2803]: W0421 03:54:11.704546 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.706029 kubelet[2803]: E0421 03:54:11.704646 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.706029 kubelet[2803]: E0421 03:54:11.705115 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.706029 kubelet[2803]: W0421 03:54:11.705123 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.706029 kubelet[2803]: E0421 03:54:11.705132 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.706029 kubelet[2803]: E0421 03:54:11.705364 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.706029 kubelet[2803]: W0421 03:54:11.705370 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.706029 kubelet[2803]: E0421 03:54:11.705377 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.706029 kubelet[2803]: E0421 03:54:11.705488 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.706029 kubelet[2803]: W0421 03:54:11.705493 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.706323 kubelet[2803]: E0421 03:54:11.705499 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.706323 kubelet[2803]: E0421 03:54:11.705722 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.706323 kubelet[2803]: W0421 03:54:11.705728 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.706323 kubelet[2803]: E0421 03:54:11.705733 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.706323 kubelet[2803]: E0421 03:54:11.705867 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.706323 kubelet[2803]: W0421 03:54:11.705872 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.706323 kubelet[2803]: E0421 03:54:11.705878 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.706323 kubelet[2803]: E0421 03:54:11.706038 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.706323 kubelet[2803]: W0421 03:54:11.706044 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.706323 kubelet[2803]: E0421 03:54:11.706050 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.706565 kubelet[2803]: E0421 03:54:11.706128 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.706565 kubelet[2803]: W0421 03:54:11.706132 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.706565 kubelet[2803]: E0421 03:54:11.706137 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.706565 kubelet[2803]: E0421 03:54:11.706277 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.706565 kubelet[2803]: W0421 03:54:11.706282 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.706565 kubelet[2803]: E0421 03:54:11.706288 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.706565 kubelet[2803]: E0421 03:54:11.706500 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.706565 kubelet[2803]: W0421 03:54:11.706507 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.706565 kubelet[2803]: E0421 03:54:11.706513 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.710542 kubelet[2803]: E0421 03:54:11.710126 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.712598 kubelet[2803]: W0421 03:54:11.712378 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.713738 kubelet[2803]: E0421 03:54:11.713523 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.714609 kubelet[2803]: E0421 03:54:11.714596 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.714729 kubelet[2803]: W0421 03:54:11.714716 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.714799 kubelet[2803]: E0421 03:54:11.714770 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.714975 kubelet[2803]: E0421 03:54:11.714968 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.715034 kubelet[2803]: W0421 03:54:11.715027 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.715118 kubelet[2803]: E0421 03:54:11.715058 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.715268 kubelet[2803]: E0421 03:54:11.715262 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.715339 kubelet[2803]: W0421 03:54:11.715308 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.721091 kubelet[2803]: E0421 03:54:11.715366 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.745528 kubelet[2803]: E0421 03:54:11.745397 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.750390 kubelet[2803]: W0421 03:54:11.748515 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.756207 kubelet[2803]: E0421 03:54:11.754481 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.758568 kubelet[2803]: I0421 03:54:11.757641 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d682c086-ad9c-40b2-928f-12d71adad6b2-registration-dir\") pod \"csi-node-driver-c26j7\" (UID: \"d682c086-ad9c-40b2-928f-12d71adad6b2\") " pod="calico-system/csi-node-driver-c26j7" Apr 21 03:54:11.770044 kubelet[2803]: E0421 03:54:11.769344 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.825240 kubelet[2803]: W0421 03:54:11.769925 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.826254 kubelet[2803]: E0421 03:54:11.826178 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.826871 kubelet[2803]: E0421 03:54:11.826855 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.827013 kubelet[2803]: W0421 03:54:11.827002 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.827069 kubelet[2803]: E0421 03:54:11.827052 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.829376 kubelet[2803]: E0421 03:54:11.828975 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.831305 kubelet[2803]: W0421 03:54:11.830588 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.832784 kubelet[2803]: E0421 03:54:11.832541 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.839728 kubelet[2803]: I0421 03:54:11.837865 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d682c086-ad9c-40b2-928f-12d71adad6b2-kubelet-dir\") pod \"csi-node-driver-c26j7\" (UID: \"d682c086-ad9c-40b2-928f-12d71adad6b2\") " pod="calico-system/csi-node-driver-c26j7" Apr 21 03:54:11.845339 kubelet[2803]: E0421 03:54:11.844813 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.846550 kubelet[2803]: W0421 03:54:11.846267 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.849882 kubelet[2803]: E0421 03:54:11.848409 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.855070 kubelet[2803]: E0421 03:54:11.853644 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.857378 kubelet[2803]: W0421 03:54:11.854049 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.857826 kubelet[2803]: E0421 03:54:11.857415 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.860876 kubelet[2803]: E0421 03:54:11.859488 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.860876 kubelet[2803]: W0421 03:54:11.859842 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.860876 kubelet[2803]: E0421 03:54:11.860084 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.865594 kubelet[2803]: I0421 03:54:11.861658 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d682c086-ad9c-40b2-928f-12d71adad6b2-socket-dir\") pod \"csi-node-driver-c26j7\" (UID: \"d682c086-ad9c-40b2-928f-12d71adad6b2\") " pod="calico-system/csi-node-driver-c26j7" Apr 21 03:54:11.871907 kubelet[2803]: E0421 03:54:11.871477 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.871907 kubelet[2803]: W0421 03:54:11.871511 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.871907 kubelet[2803]: E0421 03:54:11.871596 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.871907 kubelet[2803]: E0421 03:54:11.871904 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.871907 kubelet[2803]: W0421 03:54:11.871917 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.871907 kubelet[2803]: E0421 03:54:11.871937 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.877256 kubelet[2803]: E0421 03:54:11.876904 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.880603 kubelet[2803]: W0421 03:54:11.879765 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.903710 kubelet[2803]: E0421 03:54:11.901635 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.903710 kubelet[2803]: E0421 03:54:11.902906 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.903710 kubelet[2803]: W0421 03:54:11.902924 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.903710 kubelet[2803]: E0421 03:54:11.902943 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.906598 containerd[1583]: time="2026-04-21T03:54:11.906318460Z" level=info msg="connecting to shim ca08d2c13d745e996ec07333d9bee449b80541d328b862e2a1145d86fcaffbff" address="unix:///run/containerd/s/68d9155c87e444a5d51db72c0f0d77bc3f39f46748fd228d290e86a9ff7b56ce" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:54:11.909186 kubelet[2803]: E0421 03:54:11.909007 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.909186 kubelet[2803]: W0421 03:54:11.909049 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.909471 kubelet[2803]: E0421 03:54:11.909456 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.910315 kubelet[2803]: E0421 03:54:11.910276 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.910315 kubelet[2803]: W0421 03:54:11.910289 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.910315 kubelet[2803]: E0421 03:54:11.910302 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.910829 kubelet[2803]: E0421 03:54:11.910733 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.910829 kubelet[2803]: W0421 03:54:11.910757 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.910829 kubelet[2803]: E0421 03:54:11.910776 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.911268 kubelet[2803]: E0421 03:54:11.911244 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.911268 kubelet[2803]: W0421 03:54:11.911265 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.911351 kubelet[2803]: E0421 03:54:11.911279 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.911484 kubelet[2803]: E0421 03:54:11.911463 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.911515 kubelet[2803]: W0421 03:54:11.911483 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.911515 kubelet[2803]: E0421 03:54:11.911495 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:11.912459 kubelet[2803]: E0421 03:54:11.912273 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:11.912459 kubelet[2803]: W0421 03:54:11.912423 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:11.912858 kubelet[2803]: E0421 03:54:11.912534 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:11.922285 containerd[1583]: time="2026-04-21T03:54:11.922215878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rlhhx,Uid:9e52332e-8778-41d8-8edc-208df6eb07c7,Namespace:calico-system,Attempt:0,}" Apr 21 03:54:12.019075 kubelet[2803]: E0421 03:54:12.018942 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.019075 kubelet[2803]: W0421 03:54:12.018980 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.019075 kubelet[2803]: E0421 03:54:12.019124 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.023466 kubelet[2803]: E0421 03:54:12.021712 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.023466 kubelet[2803]: W0421 03:54:12.022043 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.023466 kubelet[2803]: E0421 03:54:12.022215 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.023466 kubelet[2803]: I0421 03:54:12.022491 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d682c086-ad9c-40b2-928f-12d71adad6b2-varrun\") pod \"csi-node-driver-c26j7\" (UID: \"d682c086-ad9c-40b2-928f-12d71adad6b2\") " pod="calico-system/csi-node-driver-c26j7" Apr 21 03:54:12.025456 kubelet[2803]: E0421 03:54:12.025308 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.025456 kubelet[2803]: W0421 03:54:12.025430 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.026035 kubelet[2803]: E0421 03:54:12.025543 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.026035 kubelet[2803]: I0421 03:54:12.025658 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kd8r\" (UniqueName: \"kubernetes.io/projected/d682c086-ad9c-40b2-928f-12d71adad6b2-kube-api-access-2kd8r\") pod \"csi-node-driver-c26j7\" (UID: \"d682c086-ad9c-40b2-928f-12d71adad6b2\") " pod="calico-system/csi-node-driver-c26j7" Apr 21 03:54:12.030273 kubelet[2803]: E0421 03:54:12.029823 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.030273 kubelet[2803]: W0421 03:54:12.030140 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.030273 kubelet[2803]: E0421 03:54:12.030295 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.031568 kubelet[2803]: E0421 03:54:12.031519 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.031568 kubelet[2803]: W0421 03:54:12.031551 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.031568 kubelet[2803]: E0421 03:54:12.031567 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.031947 containerd[1583]: time="2026-04-21T03:54:12.031675450Z" level=info msg="connecting to shim 509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768" address="unix:///run/containerd/s/8284e49d8b4a8d4805e2be994ad20702c04187586152b67114c270f96955b725" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:54:12.033688 kubelet[2803]: E0421 03:54:12.033618 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.033688 kubelet[2803]: W0421 03:54:12.033672 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.034015 kubelet[2803]: E0421 03:54:12.033818 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.034376 kubelet[2803]: E0421 03:54:12.034245 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.034376 kubelet[2803]: W0421 03:54:12.034259 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.034376 kubelet[2803]: E0421 03:54:12.034273 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.034722 kubelet[2803]: E0421 03:54:12.034518 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.039769 kubelet[2803]: W0421 03:54:12.037502 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.039769 kubelet[2803]: E0421 03:54:12.037883 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.044306 kubelet[2803]: E0421 03:54:12.043571 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.046400 kubelet[2803]: W0421 03:54:12.043919 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.046400 kubelet[2803]: E0421 03:54:12.045556 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.050214 kubelet[2803]: E0421 03:54:12.049660 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.051622 kubelet[2803]: W0421 03:54:12.050145 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.053833 kubelet[2803]: E0421 03:54:12.051708 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.055300 kubelet[2803]: E0421 03:54:12.054690 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.057467 kubelet[2803]: W0421 03:54:12.056106 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.057467 kubelet[2803]: E0421 03:54:12.056983 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.060718 kubelet[2803]: E0421 03:54:12.060400 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.060718 kubelet[2803]: W0421 03:54:12.060614 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.062913 kubelet[2803]: E0421 03:54:12.061574 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.067259 kubelet[2803]: E0421 03:54:12.066872 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.068524 kubelet[2803]: W0421 03:54:12.068287 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.069765 kubelet[2803]: E0421 03:54:12.068893 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.072486 kubelet[2803]: E0421 03:54:12.072242 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.072486 kubelet[2803]: W0421 03:54:12.072351 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.072486 kubelet[2803]: E0421 03:54:12.072406 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.075813 kubelet[2803]: E0421 03:54:12.075082 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.075813 kubelet[2803]: W0421 03:54:12.075301 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.075813 kubelet[2803]: E0421 03:54:12.075434 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.077598 kubelet[2803]: E0421 03:54:12.077494 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.078083 kubelet[2803]: W0421 03:54:12.078012 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.078083 kubelet[2803]: E0421 03:54:12.078060 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.078542 kubelet[2803]: E0421 03:54:12.078528 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.078673 kubelet[2803]: W0421 03:54:12.078658 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.113759 kubelet[2803]: E0421 03:54:12.080606 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.124846 kubelet[2803]: E0421 03:54:12.123907 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.124846 kubelet[2803]: W0421 03:54:12.124611 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.124846 kubelet[2803]: E0421 03:54:12.124815 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.126083 kubelet[2803]: E0421 03:54:12.126048 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.126083 kubelet[2803]: W0421 03:54:12.126077 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.126174 kubelet[2803]: E0421 03:54:12.126093 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.126656 kubelet[2803]: E0421 03:54:12.126621 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.126656 kubelet[2803]: W0421 03:54:12.126650 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.126656 kubelet[2803]: E0421 03:54:12.126667 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.127215 kubelet[2803]: E0421 03:54:12.127187 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.127261 kubelet[2803]: W0421 03:54:12.127219 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.127261 kubelet[2803]: E0421 03:54:12.127232 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.136307 kubelet[2803]: E0421 03:54:12.135551 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.138016 kubelet[2803]: W0421 03:54:12.136019 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.138016 kubelet[2803]: E0421 03:54:12.137669 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.151245 kubelet[2803]: E0421 03:54:12.150444 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.152462 kubelet[2803]: W0421 03:54:12.151276 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.152462 kubelet[2803]: E0421 03:54:12.151984 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.156095 kubelet[2803]: E0421 03:54:12.155835 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.157789 kubelet[2803]: W0421 03:54:12.157724 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.158761 kubelet[2803]: E0421 03:54:12.158576 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.164298 kubelet[2803]: E0421 03:54:12.162915 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.164298 kubelet[2803]: W0421 03:54:12.162971 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.167535 kubelet[2803]: E0421 03:54:12.163090 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.171392 kubelet[2803]: E0421 03:54:12.170429 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.171392 kubelet[2803]: W0421 03:54:12.170511 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.171392 kubelet[2803]: E0421 03:54:12.170642 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.177708 kubelet[2803]: E0421 03:54:12.177467 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.180007 kubelet[2803]: W0421 03:54:12.178757 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.180007 kubelet[2803]: E0421 03:54:12.178873 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.180489 kubelet[2803]: E0421 03:54:12.180434 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.180489 kubelet[2803]: W0421 03:54:12.180447 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.180489 kubelet[2803]: E0421 03:54:12.180473 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.180714 kubelet[2803]: E0421 03:54:12.180696 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.180714 kubelet[2803]: W0421 03:54:12.180708 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.181415 kubelet[2803]: E0421 03:54:12.180718 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 03:54:12.181415 kubelet[2803]: E0421 03:54:12.180970 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.181415 kubelet[2803]: W0421 03:54:12.180978 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.181415 kubelet[2803]: E0421 03:54:12.181006 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.181415 kubelet[2803]: E0421 03:54:12.181232 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.181415 kubelet[2803]: W0421 03:54:12.181238 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.181415 kubelet[2803]: E0421 03:54:12.181245 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.232588 systemd[1]: Started cri-containerd-509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768.scope - libcontainer container 509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768. Apr 21 03:54:12.234646 systemd[1]: Started cri-containerd-ca08d2c13d745e996ec07333d9bee449b80541d328b862e2a1145d86fcaffbff.scope - libcontainer container ca08d2c13d745e996ec07333d9bee449b80541d328b862e2a1145d86fcaffbff. 
Apr 21 03:54:12.303874 kubelet[2803]: E0421 03:54:12.247781 2803 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 03:54:12.303874 kubelet[2803]: W0421 03:54:12.271890 2803 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 03:54:12.303874 kubelet[2803]: E0421 03:54:12.273549 2803 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 03:54:12.648539 containerd[1583]: time="2026-04-21T03:54:12.647901345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rlhhx,Uid:9e52332e-8778-41d8-8edc-208df6eb07c7,Namespace:calico-system,Attempt:0,} returns sandbox id \"509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768\"" Apr 21 03:54:12.670858 containerd[1583]: time="2026-04-21T03:54:12.670461014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\"" Apr 21 03:54:12.699896 containerd[1583]: time="2026-04-21T03:54:12.699476108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574f8b6f89-4n4cn,Uid:a2538c18-3e79-4788-88fa-92abf6e1cc46,Namespace:calico-system,Attempt:0,} returns sandbox id \"ca08d2c13d745e996ec07333d9bee449b80541d328b862e2a1145d86fcaffbff\"" Apr 21 03:54:12.702481 kubelet[2803]: E0421 03:54:12.702190 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:54:13.220097 kubelet[2803]: E0421 03:54:13.219674 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2" Apr 21 03:54:14.863841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272330128.mount: Deactivated successfully. Apr 21 03:54:15.236010 kubelet[2803]: E0421 03:54:15.235500 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2" Apr 21 03:54:15.563893 containerd[1583]: time="2026-04-21T03:54:15.558718995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:54:15.572906 containerd[1583]: time="2026-04-21T03:54:15.572063540Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5: active requests=0, bytes read=7563544" Apr 21 03:54:15.579377 containerd[1583]: time="2026-04-21T03:54:15.578351931Z" level=info msg="ImageCreate event name:\"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:54:15.610865 containerd[1583]: time="2026-04-21T03:54:15.610368673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:54:15.621712 containerd[1583]: time="2026-04-21T03:54:15.620655870Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" with image id \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:df00fee6895ac073066d91243f29733e71f479317cacef49d50c244bb2d21ea1\", size \"7563366\" in 2.949032691s" Apr 21 03:54:15.621712 containerd[1583]: time="2026-04-21T03:54:15.621135288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.5\" returns image reference \"sha256:a8eb0feebda3c272a6a24ff173b5058ff04cbc78cfbf08befb26f6548ef76625\"" Apr 21 03:54:15.635306 containerd[1583]: time="2026-04-21T03:54:15.634826602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\"" Apr 21 03:54:15.742985 containerd[1583]: time="2026-04-21T03:54:15.742011890Z" level=info msg="CreateContainer within sandbox \"509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 03:54:15.919657 containerd[1583]: time="2026-04-21T03:54:15.918921579Z" level=info msg="Container 71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:54:16.100874 containerd[1583]: time="2026-04-21T03:54:16.100330858Z" level=info msg="CreateContainer within sandbox \"509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205\"" Apr 21 03:54:16.108218 containerd[1583]: time="2026-04-21T03:54:16.107883871Z" level=info msg="StartContainer for \"71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205\"" Apr 21 03:54:16.177522 containerd[1583]: time="2026-04-21T03:54:16.174853105Z" level=info msg="connecting to shim 71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205" address="unix:///run/containerd/s/8284e49d8b4a8d4805e2be994ad20702c04187586152b67114c270f96955b725" protocol=ttrpc version=3 Apr 21 03:54:16.714503 systemd[1]: Started cri-containerd-71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205.scope - 
libcontainer container 71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205. Apr 21 03:54:17.217134 kubelet[2803]: E0421 03:54:17.216101 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2" Apr 21 03:54:17.412545 containerd[1583]: time="2026-04-21T03:54:17.412244295Z" level=info msg="StartContainer for \"71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205\" returns successfully" Apr 21 03:54:17.530127 systemd[1]: cri-containerd-71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205.scope: Deactivated successfully. Apr 21 03:54:17.535196 systemd[1]: cri-containerd-71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205.scope: Consumed 306ms CPU time, 6.1M memory peak, 309K read from disk, 4.6M written to disk. Apr 21 03:54:17.567014 containerd[1583]: time="2026-04-21T03:54:17.566682792Z" level=info msg="received container exit event container_id:\"71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205\" id:\"71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205\" pid:3517 exited_at:{seconds:1776743657 nanos:557216519}" Apr 21 03:54:18.270420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71beea63552f15f2b2560b8c6536cb84f8e855ad5aa33cb905d1af703ee72205-rootfs.mount: Deactivated successfully. 
Apr 21 03:54:19.219949 kubelet[2803]: E0421 03:54:19.218763 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2" Apr 21 03:54:21.219553 kubelet[2803]: E0421 03:54:21.219187 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2" Apr 21 03:54:22.356281 containerd[1583]: time="2026-04-21T03:54:22.355538170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:54:22.361120 containerd[1583]: time="2026-04-21T03:54:22.360632429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.5: active requests=0, bytes read=32851576" Apr 21 03:54:22.364915 containerd[1583]: time="2026-04-21T03:54:22.364596706Z" level=info msg="ImageCreate event name:\"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:54:22.367727 containerd[1583]: time="2026-04-21T03:54:22.367537366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:54:22.367913 containerd[1583]: time="2026-04-21T03:54:22.367819160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.5\" with image id \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:76afd8f80569b3bf783991ce5348294319cefa6d6cca127710d0e068096048a6\", size \"35812993\" in 6.732527973s" Apr 21 03:54:22.367913 containerd[1583]: time="2026-04-21T03:54:22.367889882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.5\" returns image reference \"sha256:20cad3a3c174ee02dd6e103e3a7e314ada245d5e414fef6d049c10829d8856dc\"" Apr 21 03:54:22.373790 containerd[1583]: time="2026-04-21T03:54:22.373564287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\"" Apr 21 03:54:22.526401 containerd[1583]: time="2026-04-21T03:54:22.525491275Z" level=info msg="CreateContainer within sandbox \"ca08d2c13d745e996ec07333d9bee449b80541d328b862e2a1145d86fcaffbff\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 03:54:22.685063 containerd[1583]: time="2026-04-21T03:54:22.684886051Z" level=info msg="Container 8e19efea7cd47d5415f90d2052fbd8695621abc72bc3851bca15aec108ce5cba: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:54:22.707585 containerd[1583]: time="2026-04-21T03:54:22.706949163Z" level=info msg="CreateContainer within sandbox \"ca08d2c13d745e996ec07333d9bee449b80541d328b862e2a1145d86fcaffbff\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8e19efea7cd47d5415f90d2052fbd8695621abc72bc3851bca15aec108ce5cba\"" Apr 21 03:54:22.710806 containerd[1583]: time="2026-04-21T03:54:22.710587195Z" level=info msg="StartContainer for \"8e19efea7cd47d5415f90d2052fbd8695621abc72bc3851bca15aec108ce5cba\"" Apr 21 03:54:22.721488 containerd[1583]: time="2026-04-21T03:54:22.721057404Z" level=info msg="connecting to shim 8e19efea7cd47d5415f90d2052fbd8695621abc72bc3851bca15aec108ce5cba" address="unix:///run/containerd/s/68d9155c87e444a5d51db72c0f0d77bc3f39f46748fd228d290e86a9ff7b56ce" protocol=ttrpc version=3 Apr 21 03:54:22.902940 systemd[1]: Started 
cri-containerd-8e19efea7cd47d5415f90d2052fbd8695621abc72bc3851bca15aec108ce5cba.scope - libcontainer container 8e19efea7cd47d5415f90d2052fbd8695621abc72bc3851bca15aec108ce5cba.
Apr 21 03:54:23.122007 containerd[1583]: time="2026-04-21T03:54:23.121604642Z" level=info msg="StartContainer for \"8e19efea7cd47d5415f90d2052fbd8695621abc72bc3851bca15aec108ce5cba\" returns successfully"
Apr 21 03:54:23.244749 kubelet[2803]: E0421 03:54:23.244211 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:24.007360 kubelet[2803]: E0421 03:54:24.006786 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:54:25.045758 kubelet[2803]: E0421 03:54:25.045236 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:54:25.125711 kubelet[2803]: I0421 03:54:25.120809 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-574f8b6f89-4n4cn" podStartSLOduration=5.454977446 podStartE2EDuration="15.120593138s" podCreationTimestamp="2026-04-21 03:54:10 +0000 UTC" firstStartedPulling="2026-04-21 03:54:12.70712397 +0000 UTC m=+27.935444467" lastFinishedPulling="2026-04-21 03:54:22.372739667 +0000 UTC m=+37.601060159" observedRunningTime="2026-04-21 03:54:24.337103093 +0000 UTC m=+39.566193740" watchObservedRunningTime="2026-04-21 03:54:25.120593138 +0000 UTC m=+40.348913646"
Apr 21 03:54:25.251524 kubelet[2803]: E0421 03:54:25.251052 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:26.091777 kubelet[2803]: E0421 03:54:26.091405 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:54:27.122482 kubelet[2803]: E0421 03:54:27.122423 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:54:27.217386 kubelet[2803]: E0421 03:54:27.216839 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:29.221133 kubelet[2803]: E0421 03:54:29.220842 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:31.263428 kubelet[2803]: E0421 03:54:31.263052 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:33.220621 kubelet[2803]: E0421 03:54:33.220316 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:35.217705 kubelet[2803]: E0421 03:54:35.217088 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:37.217550 kubelet[2803]: E0421 03:54:37.216981 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:39.223655 kubelet[2803]: E0421 03:54:39.219188 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:39.737046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298295481.mount: Deactivated successfully.
Apr 21 03:54:39.918984 containerd[1583]: time="2026-04-21T03:54:39.918681503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:54:39.922012 containerd[1583]: time="2026-04-21T03:54:39.920501336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.5: active requests=0, bytes read=159374404"
Apr 21 03:54:39.923982 containerd[1583]: time="2026-04-21T03:54:39.923547772Z" level=info msg="ImageCreate event name:\"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:54:39.940456 containerd[1583]: time="2026-04-21T03:54:39.939894777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:54:39.971909 containerd[1583]: time="2026-04-21T03:54:39.971366331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.5\" with image id \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e2426b97a645ed620e0f4035d594f2f3344b0547cd3dc3458f45e06d5cebdad7\", size \"159374266\" in 17.597255643s"
Apr 21 03:54:39.972980 containerd[1583]: time="2026-04-21T03:54:39.971752810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.5\" returns image reference \"sha256:cfa3bb2488693bde06ff066d7e0912d23ef7e2aa2c2778dfcd5591694d840c19\""
Apr 21 03:54:40.033141 containerd[1583]: time="2026-04-21T03:54:40.031835498Z" level=info msg="CreateContainer within sandbox \"509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}"
Apr 21 03:54:40.318375 containerd[1583]: time="2026-04-21T03:54:40.315838406Z" level=info msg="Container 9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8: CDI devices from CRI Config.CDIDevices: []"
Apr 21 03:54:40.402678 containerd[1583]: time="2026-04-21T03:54:40.402449563Z" level=info msg="CreateContainer within sandbox \"509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8\""
Apr 21 03:54:40.408808 containerd[1583]: time="2026-04-21T03:54:40.408485568Z" level=info msg="StartContainer for \"9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8\""
Apr 21 03:54:40.452730 containerd[1583]: time="2026-04-21T03:54:40.451209201Z" level=info msg="connecting to shim 9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8" address="unix:///run/containerd/s/8284e49d8b4a8d4805e2be994ad20702c04187586152b67114c270f96955b725" protocol=ttrpc version=3
Apr 21 03:54:40.680750 systemd[1]: Started cri-containerd-9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8.scope - libcontainer container 9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8.
Apr 21 03:54:41.047666 containerd[1583]: time="2026-04-21T03:54:41.047226642Z" level=info msg="StartContainer for \"9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8\" returns successfully"
Apr 21 03:54:41.231366 kubelet[2803]: E0421 03:54:41.231105 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:41.250964 systemd[1]: cri-containerd-9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8.scope: Deactivated successfully.
Apr 21 03:54:41.277587 containerd[1583]: time="2026-04-21T03:54:41.277371647Z" level=info msg="received container exit event container_id:\"9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8\" id:\"9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8\" pid:3623 exited_at:{seconds:1776743681 nanos:253539776}"
Apr 21 03:54:41.432976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9530fba58b0a17c1debab6b6750ec2890173379574657bdec7039c13dd661bb8-rootfs.mount: Deactivated successfully.
Apr 21 03:54:41.628835 containerd[1583]: time="2026-04-21T03:54:41.628100740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\""
Apr 21 03:54:43.218605 kubelet[2803]: E0421 03:54:43.217851 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:45.235231 kubelet[2803]: E0421 03:54:45.216098 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:47.268514 kubelet[2803]: E0421 03:54:47.266490 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:49.273657 kubelet[2803]: E0421 03:54:49.272535 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:51.080036 containerd[1583]: time="2026-04-21T03:54:51.079521844Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:54:51.084044 containerd[1583]: time="2026-04-21T03:54:51.081032051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.5: active requests=0, bytes read=67713351"
Apr 21 03:54:51.085123 containerd[1583]: time="2026-04-21T03:54:51.084212898Z" level=info msg="ImageCreate event name:\"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:54:51.136833 containerd[1583]: time="2026-04-21T03:54:51.136140863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:54:51.142191 containerd[1583]: time="2026-04-21T03:54:51.141640636Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.5\" with image id \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:ea8a6b721af629c1dab2e1559b93cd843d9a4b640726115380fc23cf47e83232\", size \"70674776\" in 9.512825917s"
Apr 21 03:54:51.142191 containerd[1583]: time="2026-04-21T03:54:51.142120505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.5\" returns image reference \"sha256:f2487068e96f7fdaaf9d02dc114f17cdae3737bb42f1ba06d079d2d2068734b6\""
Apr 21 03:54:51.220119 containerd[1583]: time="2026-04-21T03:54:51.215475942Z" level=info msg="CreateContainer within sandbox \"509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Apr 21 03:54:51.221246 kubelet[2803]: E0421 03:54:51.218468 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:51.308976 containerd[1583]: time="2026-04-21T03:54:51.308891741Z" level=info msg="Container 8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4: CDI devices from CRI Config.CDIDevices: []"
Apr 21 03:54:51.382526 containerd[1583]: time="2026-04-21T03:54:51.380784188Z" level=info msg="CreateContainer within sandbox \"509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4\""
Apr 21 03:54:51.384254 containerd[1583]: time="2026-04-21T03:54:51.383876009Z" level=info msg="StartContainer for \"8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4\""
Apr 21 03:54:51.433065 containerd[1583]: time="2026-04-21T03:54:51.431043392Z" level=info msg="connecting to shim 8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4" address="unix:///run/containerd/s/8284e49d8b4a8d4805e2be994ad20702c04187586152b67114c270f96955b725" protocol=ttrpc version=3
Apr 21 03:54:51.579043 systemd[1]: Started cri-containerd-8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4.scope - libcontainer container 8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4.
Apr 21 03:54:51.853436 containerd[1583]: time="2026-04-21T03:54:51.852959905Z" level=info msg="StartContainer for \"8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4\" returns successfully"
Apr 21 03:54:53.219843 kubelet[2803]: E0421 03:54:53.219102 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:55.221211 kubelet[2803]: E0421 03:54:55.220866 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:54:57.097958 systemd[1]: cri-containerd-8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4.scope: Deactivated successfully.
Apr 21 03:54:57.101529 systemd[1]: cri-containerd-8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4.scope: Consumed 4.223s CPU time, 178.7M memory peak, 3.7M read from disk, 173.7M written to disk.
Apr 21 03:54:57.112787 containerd[1583]: time="2026-04-21T03:54:57.112172479Z" level=info msg="received container exit event container_id:\"8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4\" id:\"8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4\" pid:3686 exited_at:{seconds:1776743697 nanos:108445332}"
Apr 21 03:54:57.185014 kubelet[2803]: I0421 03:54:57.184695 2803 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
Apr 21 03:54:57.330788 systemd[1]: Created slice kubepods-besteffort-podd682c086_ad9c_40b2_928f_12d71adad6b2.slice - libcontainer container kubepods-besteffort-podd682c086_ad9c_40b2_928f_12d71adad6b2.slice.
Apr 21 03:54:57.524097 containerd[1583]: time="2026-04-21T03:54:57.523599723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c26j7,Uid:d682c086-ad9c-40b2-928f-12d71adad6b2,Namespace:calico-system,Attempt:0,}"
Apr 21 03:54:57.696032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8db344957848745af95c30ca903418d5227e5f0f4376877b640807e49dc1afc4-rootfs.mount: Deactivated successfully.
Apr 21 03:54:57.910724 kubelet[2803]: I0421 03:54:57.907771 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l7gx\" (UniqueName: \"kubernetes.io/projected/366a6a55-9568-4706-b261-4d20a468a8f5-kube-api-access-2l7gx\") pod \"calico-kube-controllers-7c599c88cb-rsbkx\" (UID: \"366a6a55-9568-4706-b261-4d20a468a8f5\") " pod="calico-system/calico-kube-controllers-7c599c88cb-rsbkx"
Apr 21 03:54:57.910724 kubelet[2803]: I0421 03:54:57.908208 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/366a6a55-9568-4706-b261-4d20a468a8f5-tigera-ca-bundle\") pod \"calico-kube-controllers-7c599c88cb-rsbkx\" (UID: \"366a6a55-9568-4706-b261-4d20a468a8f5\") " pod="calico-system/calico-kube-controllers-7c599c88cb-rsbkx"
Apr 21 03:54:58.050059 kubelet[2803]: I0421 03:54:58.049627 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/847e62a6-0187-43f0-ba62-38e8155e121e-calico-apiserver-certs\") pod \"calico-apiserver-6f46db48b5-hn2ql\" (UID: \"847e62a6-0187-43f0-ba62-38e8155e121e\") " pod="calico-system/calico-apiserver-6f46db48b5-hn2ql"
Apr 21 03:54:58.052876 kubelet[2803]: I0421 03:54:58.052257 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvspc\" (UniqueName: \"kubernetes.io/projected/847e62a6-0187-43f0-ba62-38e8155e121e-kube-api-access-qvspc\") pod \"calico-apiserver-6f46db48b5-hn2ql\" (UID: \"847e62a6-0187-43f0-ba62-38e8155e121e\") " pod="calico-system/calico-apiserver-6f46db48b5-hn2ql"
Apr 21 03:54:58.121219 systemd[1]: Created slice kubepods-besteffort-pod366a6a55_9568_4706_b261_4d20a468a8f5.slice - libcontainer container kubepods-besteffort-pod366a6a55_9568_4706_b261_4d20a468a8f5.slice.
Apr 21 03:54:58.228220 kubelet[2803]: I0421 03:54:58.226219 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aff92e04-d561-4ff0-a6d0-5a89cb86b276-config-volume\") pod \"coredns-7d764666f9-cl26f\" (UID: \"aff92e04-d561-4ff0-a6d0-5a89cb86b276\") " pod="kube-system/coredns-7d764666f9-cl26f"
Apr 21 03:54:58.228220 kubelet[2803]: I0421 03:54:58.226564 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfxmz\" (UniqueName: \"kubernetes.io/projected/aff92e04-d561-4ff0-a6d0-5a89cb86b276-kube-api-access-mfxmz\") pod \"coredns-7d764666f9-cl26f\" (UID: \"aff92e04-d561-4ff0-a6d0-5a89cb86b276\") " pod="kube-system/coredns-7d764666f9-cl26f"
Apr 21 03:54:58.228220 kubelet[2803]: I0421 03:54:58.226610 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvwlx\" (UniqueName: \"kubernetes.io/projected/0fa5cc54-472b-49ea-8130-96d73740c97a-kube-api-access-nvwlx\") pod \"coredns-7d764666f9-btzcx\" (UID: \"0fa5cc54-472b-49ea-8130-96d73740c97a\") " pod="kube-system/coredns-7d764666f9-btzcx"
Apr 21 03:54:58.228220 kubelet[2803]: I0421 03:54:58.226635 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fa5cc54-472b-49ea-8130-96d73740c97a-config-volume\") pod \"coredns-7d764666f9-btzcx\" (UID: \"0fa5cc54-472b-49ea-8130-96d73740c97a\") " pod="kube-system/coredns-7d764666f9-btzcx"
Apr 21 03:54:58.367893 kubelet[2803]: I0421 03:54:58.356115 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-nginx-config\") pod \"whisker-5f64bb86fb-rd6ss\" (UID: \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\") " pod="calico-system/whisker-5f64bb86fb-rd6ss"
Apr 21 03:54:58.389123 kubelet[2803]: I0421 03:54:58.384848 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-whisker-ca-bundle\") pod \"whisker-5f64bb86fb-rd6ss\" (UID: \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\") " pod="calico-system/whisker-5f64bb86fb-rd6ss"
Apr 21 03:54:58.537989 kubelet[2803]: I0421 03:54:58.517626 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e5251af5-60b9-44d8-b574-ace9275add08-goldmane-key-pair\") pod \"goldmane-7fb6cdc5d9-gdrxk\" (UID: \"e5251af5-60b9-44d8-b574-ace9275add08\") " pod="calico-system/goldmane-7fb6cdc5d9-gdrxk"
Apr 21 03:54:58.537989 kubelet[2803]: I0421 03:54:58.517737 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cmbn\" (UniqueName: \"kubernetes.io/projected/e5251af5-60b9-44d8-b574-ace9275add08-kube-api-access-2cmbn\") pod \"goldmane-7fb6cdc5d9-gdrxk\" (UID: \"e5251af5-60b9-44d8-b574-ace9275add08\") " pod="calico-system/goldmane-7fb6cdc5d9-gdrxk"
Apr 21 03:54:58.537989 kubelet[2803]: I0421 03:54:58.517775 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-whisker-backend-key-pair\") pod \"whisker-5f64bb86fb-rd6ss\" (UID: \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\") " pod="calico-system/whisker-5f64bb86fb-rd6ss"
Apr 21 03:54:58.537989 kubelet[2803]: I0421 03:54:58.517789 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5251af5-60b9-44d8-b574-ace9275add08-goldmane-ca-bundle\") pod \"goldmane-7fb6cdc5d9-gdrxk\" (UID: \"e5251af5-60b9-44d8-b574-ace9275add08\") " pod="calico-system/goldmane-7fb6cdc5d9-gdrxk"
Apr 21 03:54:58.537989 kubelet[2803]: I0421 03:54:58.517844 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh445\" (UniqueName: \"kubernetes.io/projected/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-kube-api-access-vh445\") pod \"whisker-5f64bb86fb-rd6ss\" (UID: \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\") " pod="calico-system/whisker-5f64bb86fb-rd6ss"
Apr 21 03:54:58.540026 kubelet[2803]: I0421 03:54:58.517856 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5251af5-60b9-44d8-b574-ace9275add08-config\") pod \"goldmane-7fb6cdc5d9-gdrxk\" (UID: \"e5251af5-60b9-44d8-b574-ace9275add08\") " pod="calico-system/goldmane-7fb6cdc5d9-gdrxk"
Apr 21 03:54:58.554317 containerd[1583]: time="2026-04-21T03:54:58.550816918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c599c88cb-rsbkx,Uid:366a6a55-9568-4706-b261-4d20a468a8f5,Namespace:calico-system,Attempt:0,}"
Apr 21 03:54:58.677948 kubelet[2803]: I0421 03:54:58.677483 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vwm4\" (UniqueName: \"kubernetes.io/projected/dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56-kube-api-access-8vwm4\") pod \"calico-apiserver-6f46db48b5-v9v7j\" (UID: \"dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56\") " pod="calico-system/calico-apiserver-6f46db48b5-v9v7j"
Apr 21 03:54:58.689599 kubelet[2803]: I0421 03:54:58.689086 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56-calico-apiserver-certs\") pod \"calico-apiserver-6f46db48b5-v9v7j\" (UID: \"dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56\") " pod="calico-system/calico-apiserver-6f46db48b5-v9v7j"
Apr 21 03:54:59.106830 systemd[1]: Created slice kubepods-besteffort-pod847e62a6_0187_43f0_ba62_38e8155e121e.slice - libcontainer container kubepods-besteffort-pod847e62a6_0187_43f0_ba62_38e8155e121e.slice.
Apr 21 03:54:59.533748 containerd[1583]: time="2026-04-21T03:54:59.533526116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f46db48b5-hn2ql,Uid:847e62a6-0187-43f0-ba62-38e8155e121e,Namespace:calico-system,Attempt:0,}"
Apr 21 03:54:59.564210 systemd[1]: Created slice kubepods-burstable-pod0fa5cc54_472b_49ea_8130_96d73740c97a.slice - libcontainer container kubepods-burstable-pod0fa5cc54_472b_49ea_8130_96d73740c97a.slice.
Apr 21 03:54:59.658391 kubelet[2803]: E0421 03:54:59.657888 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:54:59.723399 containerd[1583]: time="2026-04-21T03:54:59.717555734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-btzcx,Uid:0fa5cc54-472b-49ea-8130-96d73740c97a,Namespace:kube-system,Attempt:0,}"
Apr 21 03:54:59.929190 systemd[1]: Created slice kubepods-burstable-podaff92e04_d561_4ff0_a6d0_5a89cb86b276.slice - libcontainer container kubepods-burstable-podaff92e04_d561_4ff0_a6d0_5a89cb86b276.slice.
Apr 21 03:55:00.038574 containerd[1583]: time="2026-04-21T03:55:00.038273580Z" level=info msg="CreateContainer within sandbox \"509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Apr 21 03:55:00.051080 systemd[1]: Created slice kubepods-besteffort-podf2b9a2a5_8ee6_462e_aa1b_bf13f188e8da.slice - libcontainer container kubepods-besteffort-podf2b9a2a5_8ee6_462e_aa1b_bf13f188e8da.slice.
Apr 21 03:55:00.200393 kubelet[2803]: E0421 03:55:00.196692 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:55:00.209347 containerd[1583]: time="2026-04-21T03:55:00.208037457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cl26f,Uid:aff92e04-d561-4ff0-a6d0-5a89cb86b276,Namespace:kube-system,Attempt:0,}"
Apr 21 03:55:00.234491 containerd[1583]: time="2026-04-21T03:55:00.231426197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f64bb86fb-rd6ss,Uid:f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da,Namespace:calico-system,Attempt:0,}"
Apr 21 03:55:00.312789 systemd[1]: Created slice kubepods-besteffort-pode5251af5_60b9_44d8_b574_ace9275add08.slice - libcontainer container kubepods-besteffort-pode5251af5_60b9_44d8_b574_ace9275add08.slice.
Apr 21 03:55:00.334960 containerd[1583]: time="2026-04-21T03:55:00.334791726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-gdrxk,Uid:e5251af5-60b9-44d8-b574-ace9275add08,Namespace:calico-system,Attempt:0,}"
Apr 21 03:55:00.383626 systemd[1]: Created slice kubepods-besteffort-poddc0655cd_5d3b_4dbe_afdd_f75c7fa4ac56.slice - libcontainer container kubepods-besteffort-poddc0655cd_5d3b_4dbe_afdd_f75c7fa4ac56.slice.
Apr 21 03:55:00.554702 containerd[1583]: time="2026-04-21T03:55:00.553765106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f46db48b5-v9v7j,Uid:dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56,Namespace:calico-system,Attempt:0,}"
Apr 21 03:55:00.932351 containerd[1583]: time="2026-04-21T03:55:00.931992382Z" level=info msg="Container 4be9175fe67998cb58ebe999f232954ea5385240a3a62174b368b85831583fb8: CDI devices from CRI Config.CDIDevices: []"
Apr 21 03:55:00.974491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922476049.mount: Deactivated successfully.
Apr 21 03:55:01.231986 kubelet[2803]: E0421 03:55:01.230673 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:55:01.274337 containerd[1583]: time="2026-04-21T03:55:01.231926833Z" level=info msg="CreateContainer within sandbox \"509c08504e83db7cea0aa095ae155a539a101b2c01a74678e20b5068c76c9768\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4be9175fe67998cb58ebe999f232954ea5385240a3a62174b368b85831583fb8\""
Apr 21 03:55:01.341976 containerd[1583]: time="2026-04-21T03:55:01.340665574Z" level=error msg="Failed to destroy network for sandbox \"72187c18d1b51a9ff8fc2d685d1840a5a542de2fcc819fc038a5d1ada8485406\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 03:55:01.366815 systemd[1]: run-netns-cni\x2de917d0fc\x2d7c4e\x2dbfc3\x2de57e\x2db0a560c9a197.mount: Deactivated successfully.
Apr 21 03:55:01.372935 containerd[1583]: time="2026-04-21T03:55:01.367831845Z" level=info msg="StartContainer for \"4be9175fe67998cb58ebe999f232954ea5385240a3a62174b368b85831583fb8\""
Apr 21 03:55:01.381328 containerd[1583]: time="2026-04-21T03:55:01.379975512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c26j7,Uid:d682c086-ad9c-40b2-928f-12d71adad6b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72187c18d1b51a9ff8fc2d685d1840a5a542de2fcc819fc038a5d1ada8485406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 03:55:01.496848 kubelet[2803]: E0421 03:55:01.494773 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72187c18d1b51a9ff8fc2d685d1840a5a542de2fcc819fc038a5d1ada8485406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 03:55:01.501417 kubelet[2803]: E0421 03:55:01.499375 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72187c18d1b51a9ff8fc2d685d1840a5a542de2fcc819fc038a5d1ada8485406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c26j7"
Apr 21 03:55:01.501417 kubelet[2803]: E0421 03:55:01.500893 2803 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72187c18d1b51a9ff8fc2d685d1840a5a542de2fcc819fc038a5d1ada8485406\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c26j7"
Apr 21 03:55:01.501417 kubelet[2803]: E0421 03:55:01.501222 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c26j7_calico-system(d682c086-ad9c-40b2-928f-12d71adad6b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c26j7_calico-system(d682c086-ad9c-40b2-928f-12d71adad6b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72187c18d1b51a9ff8fc2d685d1840a5a542de2fcc819fc038a5d1ada8485406\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c26j7" podUID="d682c086-ad9c-40b2-928f-12d71adad6b2"
Apr 21 03:55:01.642321 containerd[1583]: time="2026-04-21T03:55:01.635710053Z" level=info msg="connecting to shim 4be9175fe67998cb58ebe999f232954ea5385240a3a62174b368b85831583fb8" address="unix:///run/containerd/s/8284e49d8b4a8d4805e2be994ad20702c04187586152b67114c270f96955b725" protocol=ttrpc version=3
Apr 21 03:55:01.983626 containerd[1583]: time="2026-04-21T03:55:01.981025310Z" level=error msg="Failed to destroy network for sandbox \"3e07e6beb7b3660e4b7685bdba9dc917b6d4ca0bf09ba5ae591fc0f94e6105d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 03:55:01.992419 containerd[1583]: time="2026-04-21T03:55:01.991840779Z" level=error msg="Failed to destroy network for sandbox \"9ffd5aaf594d7474a6baa1b7f4d89c772075f7b4af85916ff388c66aace26b23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 03:55:01.994970 systemd[1]: run-netns-cni\x2df7ceca31\x2df6b4\x2d7f84\x2dafcc\x2d701f384d47f7.mount: Deactivated successfully.
Apr 21 03:55:02.048560 systemd[1]: run-netns-cni\x2d2429f880\x2d9acd\x2db1e9\x2d0c7f\x2df67bf97e9ac5.mount: Deactivated successfully.
Apr 21 03:55:02.064435 containerd[1583]: time="2026-04-21T03:55:02.050854912Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f64bb86fb-rd6ss,Uid:f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e07e6beb7b3660e4b7685bdba9dc917b6d4ca0bf09ba5ae591fc0f94e6105d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 03:55:02.064435 containerd[1583]: time="2026-04-21T03:55:02.051890801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-btzcx,Uid:0fa5cc54-472b-49ea-8130-96d73740c97a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffd5aaf594d7474a6baa1b7f4d89c772075f7b4af85916ff388c66aace26b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 03:55:02.067754 kubelet[2803]: E0421 03:55:02.063222 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffd5aaf594d7474a6baa1b7f4d89c772075f7b4af85916ff388c66aace26b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 03:55:02.079327 kubelet[2803]: E0421 03:55:02.068580 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e07e6beb7b3660e4b7685bdba9dc917b6d4ca0bf09ba5ae591fc0f94e6105d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Apr 21 03:55:02.079327 kubelet[2803]: E0421 03:55:02.068899 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e07e6beb7b3660e4b7685bdba9dc917b6d4ca0bf09ba5ae591fc0f94e6105d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f64bb86fb-rd6ss"
Apr 21 03:55:02.079327 kubelet[2803]: E0421 03:55:02.068947 2803 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e07e6beb7b3660e4b7685bdba9dc917b6d4ca0bf09ba5ae591fc0f94e6105d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5f64bb86fb-rd6ss"
Apr 21 03:55:02.107846 kubelet[2803]: E0421 03:55:02.069134 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5f64bb86fb-rd6ss_calico-system(f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5f64bb86fb-rd6ss_calico-system(f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e07e6beb7b3660e4b7685bdba9dc917b6d4ca0bf09ba5ae591fc0f94e6105d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5f64bb86fb-rd6ss" podUID="f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da"
Apr 21 03:55:02.107846 kubelet[2803]: E0421 03:55:02.072469 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffd5aaf594d7474a6baa1b7f4d89c772075f7b4af85916ff388c66aace26b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-btzcx"
Apr 21 03:55:02.107846 kubelet[2803]: E0421 03:55:02.073227 2803 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ffd5aaf594d7474a6baa1b7f4d89c772075f7b4af85916ff388c66aace26b23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-btzcx"
Apr 21 03:55:02.107872 systemd[1]: Started cri-containerd-4be9175fe67998cb58ebe999f232954ea5385240a3a62174b368b85831583fb8.scope - libcontainer container 4be9175fe67998cb58ebe999f232954ea5385240a3a62174b368b85831583fb8.
Apr 21 03:55:02.117389 kubelet[2803]: E0421 03:55:02.117027 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-btzcx_kube-system(0fa5cc54-472b-49ea-8130-96d73740c97a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-btzcx_kube-system(0fa5cc54-472b-49ea-8130-96d73740c97a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ffd5aaf594d7474a6baa1b7f4d89c772075f7b4af85916ff388c66aace26b23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-btzcx" podUID="0fa5cc54-472b-49ea-8130-96d73740c97a" Apr 21 03:55:02.135620 containerd[1583]: time="2026-04-21T03:55:02.135124468Z" level=error msg="Failed to destroy network for sandbox \"c2a266bbe15dc352a6235a003d0c424a017618eaa5d5b4b18e22a7fbdf1d666e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.152843 systemd[1]: run-netns-cni\x2d6a1a1b45\x2d9733\x2dd1b6\x2d6929\x2d11c71ed7b8af.mount: Deactivated successfully. 
Apr 21 03:55:02.156881 containerd[1583]: time="2026-04-21T03:55:02.153681120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f46db48b5-hn2ql,Uid:847e62a6-0187-43f0-ba62-38e8155e121e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a266bbe15dc352a6235a003d0c424a017618eaa5d5b4b18e22a7fbdf1d666e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.158308 kubelet[2803]: E0421 03:55:02.156241 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a266bbe15dc352a6235a003d0c424a017618eaa5d5b4b18e22a7fbdf1d666e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.158308 kubelet[2803]: E0421 03:55:02.156672 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a266bbe15dc352a6235a003d0c424a017618eaa5d5b4b18e22a7fbdf1d666e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f46db48b5-hn2ql" Apr 21 03:55:02.158308 kubelet[2803]: E0421 03:55:02.156831 2803 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a266bbe15dc352a6235a003d0c424a017618eaa5d5b4b18e22a7fbdf1d666e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-apiserver-6f46db48b5-hn2ql" Apr 21 03:55:02.159060 kubelet[2803]: E0421 03:55:02.157011 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6f46db48b5-hn2ql_calico-system(847e62a6-0187-43f0-ba62-38e8155e121e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f46db48b5-hn2ql_calico-system(847e62a6-0187-43f0-ba62-38e8155e121e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2a266bbe15dc352a6235a003d0c424a017618eaa5d5b4b18e22a7fbdf1d666e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f46db48b5-hn2ql" podUID="847e62a6-0187-43f0-ba62-38e8155e121e" Apr 21 03:55:02.236969 kubelet[2803]: E0421 03:55:02.235101 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:02.253011 containerd[1583]: time="2026-04-21T03:55:02.252700336Z" level=error msg="Failed to destroy network for sandbox \"ca49ed7336e61e543758a11000a9afef202dfec88fea3225057a2d51345c7cf0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.262636 systemd[1]: run-netns-cni\x2d56b1d942\x2db7cc\x2dae1f\x2d9f53\x2dd799e10d811b.mount: Deactivated successfully. 
Apr 21 03:55:02.279075 containerd[1583]: time="2026-04-21T03:55:02.277717992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c599c88cb-rsbkx,Uid:366a6a55-9568-4706-b261-4d20a468a8f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca49ed7336e61e543758a11000a9afef202dfec88fea3225057a2d51345c7cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.290889 kubelet[2803]: E0421 03:55:02.290095 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca49ed7336e61e543758a11000a9afef202dfec88fea3225057a2d51345c7cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.290889 kubelet[2803]: E0421 03:55:02.290811 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca49ed7336e61e543758a11000a9afef202dfec88fea3225057a2d51345c7cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c599c88cb-rsbkx" Apr 21 03:55:02.292744 kubelet[2803]: E0421 03:55:02.290998 2803 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca49ed7336e61e543758a11000a9afef202dfec88fea3225057a2d51345c7cf0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7c599c88cb-rsbkx" Apr 21 03:55:02.298805 kubelet[2803]: E0421 03:55:02.294555 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c599c88cb-rsbkx_calico-system(366a6a55-9568-4706-b261-4d20a468a8f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c599c88cb-rsbkx_calico-system(366a6a55-9568-4706-b261-4d20a468a8f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca49ed7336e61e543758a11000a9afef202dfec88fea3225057a2d51345c7cf0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c599c88cb-rsbkx" podUID="366a6a55-9568-4706-b261-4d20a468a8f5" Apr 21 03:55:02.536180 containerd[1583]: time="2026-04-21T03:55:02.535314889Z" level=error msg="Failed to destroy network for sandbox \"958c5119a55a2e94d1b8b7729ae431c4870bfd9adf9251b0853c698267cfb617\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.560396 containerd[1583]: time="2026-04-21T03:55:02.559923648Z" level=error msg="Failed to destroy network for sandbox \"d83f707fa86fc65e4fb9eafe100baf72b1b2583fed23412abe7634e5ac24d852\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.567368 containerd[1583]: time="2026-04-21T03:55:02.565393231Z" level=error msg="Failed to destroy network for sandbox \"b01d530174be95684704bdcd48a153723bc983cf0087d352575ceed12ae5f6a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.570422 containerd[1583]: time="2026-04-21T03:55:02.567561894Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cl26f,Uid:aff92e04-d561-4ff0-a6d0-5a89cb86b276,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"958c5119a55a2e94d1b8b7729ae431c4870bfd9adf9251b0853c698267cfb617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.571544 kubelet[2803]: E0421 03:55:02.571455 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"958c5119a55a2e94d1b8b7729ae431c4870bfd9adf9251b0853c698267cfb617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.571781 kubelet[2803]: E0421 03:55:02.571587 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"958c5119a55a2e94d1b8b7729ae431c4870bfd9adf9251b0853c698267cfb617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cl26f" Apr 21 03:55:02.571781 kubelet[2803]: E0421 03:55:02.571720 2803 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"958c5119a55a2e94d1b8b7729ae431c4870bfd9adf9251b0853c698267cfb617\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-cl26f" Apr 21 03:55:02.572015 kubelet[2803]: E0421 03:55:02.571951 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-cl26f_kube-system(aff92e04-d561-4ff0-a6d0-5a89cb86b276)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-cl26f_kube-system(aff92e04-d561-4ff0-a6d0-5a89cb86b276)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"958c5119a55a2e94d1b8b7729ae431c4870bfd9adf9251b0853c698267cfb617\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-cl26f" podUID="aff92e04-d561-4ff0-a6d0-5a89cb86b276" Apr 21 03:55:02.629444 containerd[1583]: time="2026-04-21T03:55:02.629049879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-gdrxk,Uid:e5251af5-60b9-44d8-b574-ace9275add08,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d83f707fa86fc65e4fb9eafe100baf72b1b2583fed23412abe7634e5ac24d852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.638118 kubelet[2803]: E0421 03:55:02.634665 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d83f707fa86fc65e4fb9eafe100baf72b1b2583fed23412abe7634e5ac24d852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.638118 kubelet[2803]: E0421 03:55:02.635812 2803 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d83f707fa86fc65e4fb9eafe100baf72b1b2583fed23412abe7634e5ac24d852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7fb6cdc5d9-gdrxk" Apr 21 03:55:02.638118 kubelet[2803]: E0421 03:55:02.636067 2803 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d83f707fa86fc65e4fb9eafe100baf72b1b2583fed23412abe7634e5ac24d852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7fb6cdc5d9-gdrxk" Apr 21 03:55:02.640295 kubelet[2803]: E0421 03:55:02.636520 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7fb6cdc5d9-gdrxk_calico-system(e5251af5-60b9-44d8-b574-ace9275add08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7fb6cdc5d9-gdrxk_calico-system(e5251af5-60b9-44d8-b574-ace9275add08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d83f707fa86fc65e4fb9eafe100baf72b1b2583fed23412abe7634e5ac24d852\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7fb6cdc5d9-gdrxk" podUID="e5251af5-60b9-44d8-b574-ace9275add08" Apr 21 03:55:02.643430 containerd[1583]: time="2026-04-21T03:55:02.642826534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f46db48b5-v9v7j,Uid:dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01d530174be95684704bdcd48a153723bc983cf0087d352575ceed12ae5f6a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.651902 kubelet[2803]: E0421 03:55:02.651068 2803 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01d530174be95684704bdcd48a153723bc983cf0087d352575ceed12ae5f6a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 03:55:02.651902 kubelet[2803]: E0421 03:55:02.651805 2803 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01d530174be95684704bdcd48a153723bc983cf0087d352575ceed12ae5f6a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f46db48b5-v9v7j" Apr 21 03:55:02.651902 kubelet[2803]: E0421 03:55:02.651978 2803 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b01d530174be95684704bdcd48a153723bc983cf0087d352575ceed12ae5f6a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6f46db48b5-v9v7j" Apr 21 03:55:02.669386 kubelet[2803]: E0421 03:55:02.663536 2803 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6f46db48b5-v9v7j_calico-system(dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6f46db48b5-v9v7j_calico-system(dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b01d530174be95684704bdcd48a153723bc983cf0087d352575ceed12ae5f6a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6f46db48b5-v9v7j" podUID="dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56" Apr 21 03:55:02.816423 systemd[1]: run-netns-cni\x2d2191f025\x2ddf5e\x2d72e7\x2de066\x2d72609cf03d9e.mount: Deactivated successfully. Apr 21 03:55:02.819988 systemd[1]: run-netns-cni\x2dd3d66dea\x2d74ea\x2d7cd5\x2d8410\x2d278c9f5edec7.mount: Deactivated successfully. Apr 21 03:55:02.821053 systemd[1]: run-netns-cni\x2d97e569ff\x2d5ab6\x2d7a08\x2d2d1d\x2d8b2d4657e235.mount: Deactivated successfully. 
Apr 21 03:55:03.212100 containerd[1583]: time="2026-04-21T03:55:03.211605838Z" level=info msg="StartContainer for \"4be9175fe67998cb58ebe999f232954ea5385240a3a62174b368b85831583fb8\" returns successfully" Apr 21 03:55:03.219880 kubelet[2803]: E0421 03:55:03.217137 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:04.110995 kubelet[2803]: I0421 03:55:04.110250 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-rlhhx" podStartSLOduration=7.115791959 podStartE2EDuration="54.110069044s" podCreationTimestamp="2026-04-21 03:54:10 +0000 UTC" firstStartedPulling="2026-04-21 03:54:12.665740142 +0000 UTC m=+27.894060634" lastFinishedPulling="2026-04-21 03:54:59.660017231 +0000 UTC m=+74.888337719" observedRunningTime="2026-04-21 03:55:04.105607521 +0000 UTC m=+79.333928020" watchObservedRunningTime="2026-04-21 03:55:04.110069044 +0000 UTC m=+79.338389531" Apr 21 03:55:05.587863 kubelet[2803]: I0421 03:55:05.587319 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-whisker-backend-key-pair\") pod \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\" (UID: \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\") " Apr 21 03:55:05.591035 kubelet[2803]: I0421 03:55:05.588050 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-kube-api-access-vh445\" (UniqueName: \"kubernetes.io/projected/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-kube-api-access-vh445\") pod \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\" (UID: \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\") " Apr 21 03:55:05.592328 kubelet[2803]: I0421 03:55:05.591711 2803 reconciler_common.go:163] 
"operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-nginx-config\" (UniqueName: \"kubernetes.io/configmap/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-nginx-config\") pod \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\" (UID: \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\") " Apr 21 03:55:05.592328 kubelet[2803]: I0421 03:55:05.592007 2803 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-whisker-ca-bundle\") pod \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\" (UID: \"f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da\") " Apr 21 03:55:05.593547 kubelet[2803]: I0421 03:55:05.593346 2803 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-nginx-config" pod "f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da" (UID: "f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 03:55:05.594303 kubelet[2803]: I0421 03:55:05.593918 2803 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 21 03:55:05.594303 kubelet[2803]: I0421 03:55:05.594290 2803 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-whisker-ca-bundle" pod "f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da" (UID: "f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 03:55:05.609702 systemd[1]: var-lib-kubelet-pods-f2b9a2a5\x2d8ee6\x2d462e\x2daa1b\x2dbf13f188e8da-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvh445.mount: Deactivated successfully. Apr 21 03:55:05.620287 kubelet[2803]: I0421 03:55:05.612326 2803 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-whisker-backend-key-pair" pod "f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da" (UID: "f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 03:55:05.620287 kubelet[2803]: I0421 03:55:05.615905 2803 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-kube-api-access-vh445" pod "f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da" (UID: "f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da"). InnerVolumeSpecName "kube-api-access-vh445". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 03:55:05.622885 systemd[1]: var-lib-kubelet-pods-f2b9a2a5\x2d8ee6\x2d462e\x2daa1b\x2dbf13f188e8da-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 21 03:55:05.696627 kubelet[2803]: I0421 03:55:05.696118 2803 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 21 03:55:05.696627 kubelet[2803]: I0421 03:55:05.696517 2803 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 21 03:55:05.696627 kubelet[2803]: I0421 03:55:05.696594 2803 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vh445\" (UniqueName: \"kubernetes.io/projected/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da-kube-api-access-vh445\") on node \"localhost\" DevicePath \"\"" Apr 21 03:55:06.053946 systemd[1]: Removed slice kubepods-besteffort-podf2b9a2a5_8ee6_462e_aa1b_bf13f188e8da.slice - libcontainer container kubepods-besteffort-podf2b9a2a5_8ee6_462e_aa1b_bf13f188e8da.slice. 
Apr 21 03:55:07.104103 kubelet[2803]: I0421 03:55:07.102287 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265-whisker-ca-bundle\") pod \"whisker-79c9d75764-r8nzx\" (UID: \"2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265\") " pod="calico-system/whisker-79c9d75764-r8nzx" Apr 21 03:55:07.104103 kubelet[2803]: I0421 03:55:07.102811 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265-nginx-config\") pod \"whisker-79c9d75764-r8nzx\" (UID: \"2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265\") " pod="calico-system/whisker-79c9d75764-r8nzx" Apr 21 03:55:07.150820 kubelet[2803]: I0421 03:55:07.110100 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24k2g\" (UniqueName: \"kubernetes.io/projected/2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265-kube-api-access-24k2g\") pod \"whisker-79c9d75764-r8nzx\" (UID: \"2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265\") " pod="calico-system/whisker-79c9d75764-r8nzx" Apr 21 03:55:07.152770 kubelet[2803]: I0421 03:55:07.152637 2803 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265-whisker-backend-key-pair\") pod \"whisker-79c9d75764-r8nzx\" (UID: \"2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265\") " pod="calico-system/whisker-79c9d75764-r8nzx" Apr 21 03:55:07.161064 systemd[1]: Created slice kubepods-besteffort-pod2a5f1a7b_bdd3_4c3e_9c70_6b39532fd265.slice - libcontainer container kubepods-besteffort-pod2a5f1a7b_bdd3_4c3e_9c70_6b39532fd265.slice. 
Apr 21 03:55:07.238088 kubelet[2803]: I0421 03:55:07.237761 2803 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da" path="/var/lib/kubelet/pods/f2b9a2a5-8ee6-462e-aa1b-bf13f188e8da/volumes" Apr 21 03:55:07.426718 systemd[1]: Started sshd@7-10.0.0.123:22-10.0.0.1:53196.service - OpenSSH per-connection server daemon (10.0.0.1:53196). Apr 21 03:55:07.951343 containerd[1583]: time="2026-04-21T03:55:07.942708310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c9d75764-r8nzx,Uid:2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265,Namespace:calico-system,Attempt:0,}" Apr 21 03:55:08.685045 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 53196 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:55:08.690828 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:55:08.939070 systemd-logind[1551]: New session 8 of user core. Apr 21 03:55:09.032868 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 21 03:55:11.219894 sshd[4111]: Connection closed by 10.0.0.1 port 53196 Apr 21 03:55:11.215970 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Apr 21 03:55:11.300532 systemd[1]: sshd@7-10.0.0.123:22-10.0.0.1:53196.service: Deactivated successfully. Apr 21 03:55:11.346009 systemd[1]: session-8.scope: Deactivated successfully. Apr 21 03:55:11.425741 systemd-logind[1551]: Session 8 logged out. Waiting for processes to exit. Apr 21 03:55:11.443024 systemd-logind[1551]: Removed session 8. 
Apr 21 03:55:12.315364 containerd[1583]: time="2026-04-21T03:55:12.314949456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c26j7,Uid:d682c086-ad9c-40b2-928f-12d71adad6b2,Namespace:calico-system,Attempt:0,}" Apr 21 03:55:13.032434 systemd-networkd[1475]: cali7bc37c4f8b2: Link UP Apr 21 03:55:13.051838 systemd-networkd[1475]: cali7bc37c4f8b2: Gained carrier Apr 21 03:55:13.341414 containerd[1583]: 2026-04-21 03:55:08.609 [ERROR][4097] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 03:55:13.341414 containerd[1583]: 2026-04-21 03:55:09.407 [INFO][4097] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79c9d75764--r8nzx-eth0 whisker-79c9d75764- calico-system 2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265 1064 0 2026-04-21 03:55:06 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79c9d75764 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79c9d75764-r8nzx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7bc37c4f8b2 [] [] }} ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Namespace="calico-system" Pod="whisker-79c9d75764-r8nzx" WorkloadEndpoint="localhost-k8s-whisker--79c9d75764--r8nzx-" Apr 21 03:55:13.341414 containerd[1583]: 2026-04-21 03:55:09.410 [INFO][4097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Namespace="calico-system" Pod="whisker-79c9d75764-r8nzx" WorkloadEndpoint="localhost-k8s-whisker--79c9d75764--r8nzx-eth0" Apr 21 03:55:13.341414 containerd[1583]: 2026-04-21 03:55:10.946 [INFO][4122] ipam/ipam_plugin.go 235: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" HandleID="k8s-pod-network.2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Workload="localhost-k8s-whisker--79c9d75764--r8nzx-eth0" Apr 21 03:55:13.427078 containerd[1583]: 2026-04-21 03:55:11.135 [INFO][4122] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" HandleID="k8s-pod-network.2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Workload="localhost-k8s-whisker--79c9d75764--r8nzx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000391430), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79c9d75764-r8nzx", "timestamp":"2026-04-21 03:55:10.94556529 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00040c6e0)} Apr 21 03:55:13.427078 containerd[1583]: 2026-04-21 03:55:11.137 [INFO][4122] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 03:55:13.427078 containerd[1583]: 2026-04-21 03:55:11.139 [INFO][4122] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 03:55:13.427078 containerd[1583]: 2026-04-21 03:55:11.140 [INFO][4122] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 03:55:13.427078 containerd[1583]: 2026-04-21 03:55:11.302 [INFO][4122] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" host="localhost" Apr 21 03:55:13.427078 containerd[1583]: 2026-04-21 03:55:11.497 [INFO][4122] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 03:55:13.427078 containerd[1583]: 2026-04-21 03:55:11.642 [INFO][4122] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 03:55:13.427078 containerd[1583]: 2026-04-21 03:55:11.758 [INFO][4122] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:13.427078 containerd[1583]: 2026-04-21 03:55:11.837 [INFO][4122] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:13.427078 containerd[1583]: 2026-04-21 03:55:11.837 [INFO][4122] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" host="localhost" Apr 21 03:55:13.433064 containerd[1583]: 2026-04-21 03:55:11.915 [INFO][4122] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72 Apr 21 03:55:13.433064 containerd[1583]: 2026-04-21 03:55:12.066 [INFO][4122] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" host="localhost" Apr 21 03:55:13.433064 containerd[1583]: 2026-04-21 03:55:12.304 [INFO][4122] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" host="localhost" Apr 21 03:55:13.433064 containerd[1583]: 2026-04-21 03:55:12.305 [INFO][4122] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" host="localhost" Apr 21 03:55:13.433064 containerd[1583]: 2026-04-21 03:55:12.306 [INFO][4122] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 03:55:13.433064 containerd[1583]: 2026-04-21 03:55:12.306 [INFO][4122] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" HandleID="k8s-pod-network.2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Workload="localhost-k8s-whisker--79c9d75764--r8nzx-eth0" Apr 21 03:55:13.434629 containerd[1583]: 2026-04-21 03:55:12.433 [INFO][4097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Namespace="calico-system" Pod="whisker-79c9d75764-r8nzx" WorkloadEndpoint="localhost-k8s-whisker--79c9d75764--r8nzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79c9d75764--r8nzx-eth0", GenerateName:"whisker-79c9d75764-", Namespace:"calico-system", SelfLink:"", UID:"2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79c9d75764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79c9d75764-r8nzx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7bc37c4f8b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:13.434629 containerd[1583]: 2026-04-21 03:55:12.474 [INFO][4097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Namespace="calico-system" Pod="whisker-79c9d75764-r8nzx" WorkloadEndpoint="localhost-k8s-whisker--79c9d75764--r8nzx-eth0" Apr 21 03:55:13.435589 kubelet[2803]: E0421 03:55:13.434295 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:13.440928 containerd[1583]: 2026-04-21 03:55:12.500 [INFO][4097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bc37c4f8b2 ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Namespace="calico-system" Pod="whisker-79c9d75764-r8nzx" WorkloadEndpoint="localhost-k8s-whisker--79c9d75764--r8nzx-eth0" Apr 21 03:55:13.440928 containerd[1583]: 2026-04-21 03:55:13.048 [INFO][4097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Namespace="calico-system" Pod="whisker-79c9d75764-r8nzx" WorkloadEndpoint="localhost-k8s-whisker--79c9d75764--r8nzx-eth0" Apr 21 03:55:13.441382 containerd[1583]: 2026-04-21 03:55:13.052 [INFO][4097] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Namespace="calico-system" Pod="whisker-79c9d75764-r8nzx" WorkloadEndpoint="localhost-k8s-whisker--79c9d75764--r8nzx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79c9d75764--r8nzx-eth0", GenerateName:"whisker-79c9d75764-", Namespace:"calico-system", SelfLink:"", UID:"2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79c9d75764", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72", Pod:"whisker-79c9d75764-r8nzx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7bc37c4f8b2", MAC:"f2:e2:1b:ea:02:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:13.442470 containerd[1583]: 2026-04-21 03:55:13.250 [INFO][4097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" Namespace="calico-system" 
Pod="whisker-79c9d75764-r8nzx" WorkloadEndpoint="localhost-k8s-whisker--79c9d75764--r8nzx-eth0" Apr 21 03:55:14.221490 systemd-networkd[1475]: cali7bc37c4f8b2: Gained IPv6LL Apr 21 03:55:14.369036 kubelet[2803]: E0421 03:55:14.368684 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:14.498138 containerd[1583]: time="2026-04-21T03:55:14.416100386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f46db48b5-hn2ql,Uid:847e62a6-0187-43f0-ba62-38e8155e121e,Namespace:calico-system,Attempt:0,}" Apr 21 03:55:14.558968 containerd[1583]: time="2026-04-21T03:55:14.558687447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-btzcx,Uid:0fa5cc54-472b-49ea-8130-96d73740c97a,Namespace:kube-system,Attempt:0,}" Apr 21 03:55:14.576664 containerd[1583]: time="2026-04-21T03:55:14.576317037Z" level=info msg="connecting to shim 2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" address="unix:///run/containerd/s/aaa1c6525e9544c249c2779721d8c0ca584c708454e8c8b59dcc735aa8fc1db6" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:55:15.393491 containerd[1583]: time="2026-04-21T03:55:15.390472356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-gdrxk,Uid:e5251af5-60b9-44d8-b574-ace9275add08,Namespace:calico-system,Attempt:0,}" Apr 21 03:55:16.337644 systemd[1]: Started sshd@8-10.0.0.123:22-10.0.0.1:58842.service - OpenSSH per-connection server daemon (10.0.0.1:58842). 
Apr 21 03:55:17.542690 kubelet[2803]: E0421 03:55:17.541069 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:17.737209 containerd[1583]: time="2026-04-21T03:55:17.737053172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cl26f,Uid:aff92e04-d561-4ff0-a6d0-5a89cb86b276,Namespace:kube-system,Attempt:0,}" Apr 21 03:55:17.745303 containerd[1583]: time="2026-04-21T03:55:17.745023885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c599c88cb-rsbkx,Uid:366a6a55-9568-4706-b261-4d20a468a8f5,Namespace:calico-system,Attempt:0,}" Apr 21 03:55:18.205227 sshd[4340]: Accepted publickey for core from 10.0.0.1 port 58842 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:55:18.301673 sshd-session[4340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:55:18.578369 systemd-logind[1551]: New session 9 of user core. Apr 21 03:55:18.621887 containerd[1583]: time="2026-04-21T03:55:18.593459678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f46db48b5-v9v7j,Uid:dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56,Namespace:calico-system,Attempt:0,}" Apr 21 03:55:18.621177 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 03:55:19.360079 systemd[1]: Started cri-containerd-2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72.scope - libcontainer container 2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72. Apr 21 03:55:21.620039 sshd[4390]: Connection closed by 10.0.0.1 port 58842 Apr 21 03:55:21.623390 sshd-session[4340]: pam_unix(sshd:session): session closed for user core Apr 21 03:55:21.905555 systemd[1]: sshd@8-10.0.0.123:22-10.0.0.1:58842.service: Deactivated successfully. Apr 21 03:55:22.004020 systemd[1]: session-9.scope: Deactivated successfully. 
Apr 21 03:55:22.049754 systemd-logind[1551]: Session 9 logged out. Waiting for processes to exit. Apr 21 03:55:22.126279 containerd[1583]: time="2026-04-21T03:55:22.050257875Z" level=error msg="get state for 2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72" error="context deadline exceeded" Apr 21 03:55:22.126279 containerd[1583]: time="2026-04-21T03:55:22.050506603Z" level=warning msg="unknown status" status=0 Apr 21 03:55:22.176823 systemd-logind[1551]: Removed session 9. Apr 21 03:55:22.416586 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 03:55:23.662988 containerd[1583]: time="2026-04-21T03:55:23.662661537Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 21 03:55:24.274518 systemd-networkd[1475]: cali831a57b7987: Link UP Apr 21 03:55:24.505057 systemd-networkd[1475]: cali831a57b7987: Gained carrier Apr 21 03:55:25.869219 systemd-networkd[1475]: cali831a57b7987: Gained IPv6LL Apr 21 03:55:26.082778 containerd[1583]: 2026-04-21 03:55:13.192 [ERROR][4153] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 03:55:26.082778 containerd[1583]: 2026-04-21 03:55:13.542 [INFO][4153] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--c26j7-eth0 csi-node-driver- calico-system d682c086-ad9c-40b2-928f-12d71adad6b2 794 0 2026-04-21 03:54:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6986d7597d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-c26j7 eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] cali831a57b7987 [] [] }} ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" Namespace="calico-system" Pod="csi-node-driver-c26j7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c26j7-" Apr 21 03:55:26.082778 containerd[1583]: 2026-04-21 03:55:13.542 [INFO][4153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" Namespace="calico-system" Pod="csi-node-driver-c26j7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c26j7-eth0" Apr 21 03:55:26.082778 containerd[1583]: 2026-04-21 03:55:17.415 [INFO][4220] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" HandleID="k8s-pod-network.94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" Workload="localhost-k8s-csi--node--driver--c26j7-eth0" Apr 21 03:55:26.099092 containerd[1583]: 2026-04-21 03:55:18.224 [INFO][4220] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" HandleID="k8s-pod-network.94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" Workload="localhost-k8s-csi--node--driver--c26j7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c9e70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-c26j7", "timestamp":"2026-04-21 03:55:17.414631499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000562000)} Apr 21 03:55:26.099092 containerd[1583]: 2026-04-21 03:55:18.224 [INFO][4220] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 03:55:26.099092 containerd[1583]: 2026-04-21 03:55:18.224 [INFO][4220] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 03:55:26.099092 containerd[1583]: 2026-04-21 03:55:18.224 [INFO][4220] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 03:55:26.099092 containerd[1583]: 2026-04-21 03:55:18.666 [INFO][4220] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" host="localhost" Apr 21 03:55:26.099092 containerd[1583]: 2026-04-21 03:55:20.805 [INFO][4220] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 03:55:26.099092 containerd[1583]: 2026-04-21 03:55:21.904 [INFO][4220] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 03:55:26.099092 containerd[1583]: 2026-04-21 03:55:22.303 [INFO][4220] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:26.099092 containerd[1583]: 2026-04-21 03:55:22.693 [INFO][4220] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:26.099092 containerd[1583]: 2026-04-21 03:55:22.730 [INFO][4220] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" host="localhost" Apr 21 03:55:26.158191 containerd[1583]: 2026-04-21 03:55:23.060 [INFO][4220] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a Apr 21 03:55:26.158191 containerd[1583]: 2026-04-21 03:55:23.333 [INFO][4220] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" host="localhost" Apr 21 03:55:26.158191 containerd[1583]: 2026-04-21 03:55:23.524 [INFO][4220] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" host="localhost" Apr 21 03:55:26.158191 containerd[1583]: 2026-04-21 03:55:23.525 [INFO][4220] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" host="localhost" Apr 21 03:55:26.158191 containerd[1583]: 2026-04-21 03:55:23.526 [INFO][4220] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 03:55:26.158191 containerd[1583]: 2026-04-21 03:55:23.526 [INFO][4220] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" HandleID="k8s-pod-network.94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" Workload="localhost-k8s-csi--node--driver--c26j7-eth0" Apr 21 03:55:26.159971 containerd[1583]: 2026-04-21 03:55:24.031 [INFO][4153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" Namespace="calico-system" Pod="csi-node-driver-c26j7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c26j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c26j7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d682c086-ad9c-40b2-928f-12d71adad6b2", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 54, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6986d7597d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-c26j7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali831a57b7987", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:26.160809 containerd[1583]: 2026-04-21 03:55:24.034 [INFO][4153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" Namespace="calico-system" Pod="csi-node-driver-c26j7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c26j7-eth0" Apr 21 03:55:26.160809 containerd[1583]: 2026-04-21 03:55:24.034 [INFO][4153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali831a57b7987 ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" Namespace="calico-system" Pod="csi-node-driver-c26j7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c26j7-eth0" Apr 21 03:55:26.160809 containerd[1583]: 2026-04-21 03:55:24.571 [INFO][4153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" Namespace="calico-system" Pod="csi-node-driver-c26j7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c26j7-eth0" Apr 21 03:55:26.173648 containerd[1583]: 2026-04-21 03:55:24.756 [INFO][4153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" Namespace="calico-system" Pod="csi-node-driver-c26j7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c26j7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--c26j7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d682c086-ad9c-40b2-928f-12d71adad6b2", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 54, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6986d7597d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a", Pod:"csi-node-driver-c26j7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali831a57b7987", MAC:"d6:05:d2:51:23:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:26.224039 containerd[1583]: 2026-04-21 03:55:25.803 [INFO][4153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" 
Namespace="calico-system" Pod="csi-node-driver-c26j7" WorkloadEndpoint="localhost-k8s-csi--node--driver--c26j7-eth0" Apr 21 03:55:26.224039 containerd[1583]: time="2026-04-21T03:55:26.135240953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79c9d75764-r8nzx,Uid:2a5f1a7b-bdd3-4c3e-9c70-6b39532fd265,Namespace:calico-system,Attempt:0,} returns sandbox id \"2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72\"" Apr 21 03:55:26.541290 containerd[1583]: time="2026-04-21T03:55:26.540929695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\"" Apr 21 03:55:26.811050 systemd[1]: Started sshd@9-10.0.0.123:22-10.0.0.1:60658.service - OpenSSH per-connection server daemon (10.0.0.1:60658). Apr 21 03:55:27.668653 sshd[4519]: Accepted publickey for core from 10.0.0.1 port 60658 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:55:27.675495 sshd-session[4519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:55:27.738205 systemd-logind[1551]: New session 10 of user core. Apr 21 03:55:27.745905 systemd[1]: Started session-10.scope - Session 10 of User core. 
Apr 21 03:55:27.761927 containerd[1583]: time="2026-04-21T03:55:27.761751245Z" level=info msg="connecting to shim 94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" address="unix:///run/containerd/s/1cf32a5fc1b354d774abecc1a07157675ae5a626f626b63261fd1cd75fca4c44" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:55:27.926367 systemd-networkd[1475]: cali8eafe934a27: Link UP Apr 21 03:55:27.970371 systemd-networkd[1475]: cali8eafe934a27: Gained carrier Apr 21 03:55:28.330349 containerd[1583]: 2026-04-21 03:55:17.744 [ERROR][4275] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 03:55:28.330349 containerd[1583]: 2026-04-21 03:55:19.113 [INFO][4275] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--btzcx-eth0 coredns-7d764666f9- kube-system 0fa5cc54-472b-49ea-8130-96d73740c97a 999 0 2026-04-21 03:53:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-btzcx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8eafe934a27 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Namespace="kube-system" Pod="coredns-7d764666f9-btzcx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--btzcx-" Apr 21 03:55:28.330349 containerd[1583]: 2026-04-21 03:55:19.448 [INFO][4275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Namespace="kube-system" Pod="coredns-7d764666f9-btzcx" 
WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--btzcx-eth0" Apr 21 03:55:28.330349 containerd[1583]: 2026-04-21 03:55:24.340 [INFO][4432] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" HandleID="k8s-pod-network.2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Workload="localhost-k8s-coredns--7d764666f9--btzcx-eth0" Apr 21 03:55:28.331735 containerd[1583]: 2026-04-21 03:55:25.152 [INFO][4432] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" HandleID="k8s-pod-network.2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Workload="localhost-k8s-coredns--7d764666f9--btzcx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003969e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-btzcx", "timestamp":"2026-04-21 03:55:24.340107507 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001994a0)} Apr 21 03:55:28.331735 containerd[1583]: 2026-04-21 03:55:25.152 [INFO][4432] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 03:55:28.331735 containerd[1583]: 2026-04-21 03:55:25.153 [INFO][4432] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 03:55:28.331735 containerd[1583]: 2026-04-21 03:55:25.154 [INFO][4432] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 03:55:28.331735 containerd[1583]: 2026-04-21 03:55:25.504 [INFO][4432] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" host="localhost" Apr 21 03:55:28.331735 containerd[1583]: 2026-04-21 03:55:26.204 [INFO][4432] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 03:55:28.331735 containerd[1583]: 2026-04-21 03:55:26.630 [INFO][4432] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 03:55:28.331735 containerd[1583]: 2026-04-21 03:55:27.207 [INFO][4432] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:28.331735 containerd[1583]: 2026-04-21 03:55:27.417 [INFO][4432] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:28.331735 containerd[1583]: 2026-04-21 03:55:27.433 [INFO][4432] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" host="localhost" Apr 21 03:55:28.331997 containerd[1583]: 2026-04-21 03:55:27.521 [INFO][4432] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4 Apr 21 03:55:28.331997 containerd[1583]: 2026-04-21 03:55:27.600 [INFO][4432] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" host="localhost" Apr 21 03:55:28.331997 containerd[1583]: 2026-04-21 03:55:27.682 [INFO][4432] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" host="localhost" Apr 21 03:55:28.331997 containerd[1583]: 2026-04-21 03:55:27.682 [INFO][4432] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" host="localhost" Apr 21 03:55:28.331997 containerd[1583]: 2026-04-21 03:55:27.713 [INFO][4432] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 03:55:28.331997 containerd[1583]: 2026-04-21 03:55:27.714 [INFO][4432] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" HandleID="k8s-pod-network.2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Workload="localhost-k8s-coredns--7d764666f9--btzcx-eth0" Apr 21 03:55:28.332093 containerd[1583]: 2026-04-21 03:55:27.745 [INFO][4275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Namespace="kube-system" Pod="coredns-7d764666f9-btzcx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--btzcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--btzcx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0fa5cc54-472b-49ea-8130-96d73740c97a", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 53, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-btzcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eafe934a27", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:28.332093 containerd[1583]: 2026-04-21 03:55:27.763 [INFO][4275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Namespace="kube-system" Pod="coredns-7d764666f9-btzcx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--btzcx-eth0" Apr 21 03:55:28.332093 containerd[1583]: 2026-04-21 03:55:27.763 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8eafe934a27 ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Namespace="kube-system" Pod="coredns-7d764666f9-btzcx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--btzcx-eth0" Apr 21 
03:55:28.332093 containerd[1583]: 2026-04-21 03:55:28.049 [INFO][4275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Namespace="kube-system" Pod="coredns-7d764666f9-btzcx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--btzcx-eth0" Apr 21 03:55:28.332093 containerd[1583]: 2026-04-21 03:55:28.054 [INFO][4275] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Namespace="kube-system" Pod="coredns-7d764666f9-btzcx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--btzcx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--btzcx-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"0fa5cc54-472b-49ea-8130-96d73740c97a", ResourceVersion:"999", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 53, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4", Pod:"coredns-7d764666f9-btzcx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8eafe934a27", 
MAC:"6e:70:38:89:fb:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:28.332093 containerd[1583]: 2026-04-21 03:55:28.308 [INFO][4275] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" Namespace="kube-system" Pod="coredns-7d764666f9-btzcx" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--btzcx-eth0" Apr 21 03:55:28.546775 systemd[1]: Started cri-containerd-94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a.scope - libcontainer container 94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a. 
Apr 21 03:55:28.823791 containerd[1583]: time="2026-04-21T03:55:28.820795969Z" level=info msg="connecting to shim 2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4" address="unix:///run/containerd/s/86c2cdb9db793662ff1d38519ed83ffea7ecbd636b6f6a5bc8cd1e4635286215" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:55:29.070910 sshd[4575]: Connection closed by 10.0.0.1 port 60658 Apr 21 03:55:29.073711 sshd-session[4519]: pam_unix(sshd:session): session closed for user core Apr 21 03:55:29.074484 systemd-networkd[1475]: cali8eafe934a27: Gained IPv6LL Apr 21 03:55:29.265257 systemd[1]: sshd@9-10.0.0.123:22-10.0.0.1:60658.service: Deactivated successfully. Apr 21 03:55:29.336220 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 03:55:29.510016 systemd-logind[1551]: Session 10 logged out. Waiting for processes to exit. Apr 21 03:55:29.518105 systemd-networkd[1475]: cali1494b217a07: Link UP Apr 21 03:55:29.533266 systemd-networkd[1475]: cali1494b217a07: Gained carrier Apr 21 03:55:29.630923 systemd-logind[1551]: Removed session 10. Apr 21 03:55:29.737835 systemd[1]: Started cri-containerd-2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4.scope - libcontainer container 2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4. 
Apr 21 03:55:30.196687 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:18.115 [ERROR][4248] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:20.686 [INFO][4248] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0 calico-apiserver-6f46db48b5- calico-system 847e62a6-0187-43f0-ba62-38e8155e121e 987 0 2026-04-21 03:54:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f46db48b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f46db48b5-hn2ql eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali1494b217a07 [] [] }} ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-hn2ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:20.796 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-hn2ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:25.484 [INFO][4449] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" 
HandleID="k8s-pod-network.daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Workload="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:26.073 [INFO][4449] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" HandleID="k8s-pod-network.daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Workload="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000199840), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6f46db48b5-hn2ql", "timestamp":"2026-04-21 03:55:25.484366334 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00014d8c0)} Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:26.134 [INFO][4449] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:27.682 [INFO][4449] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:27.683 [INFO][4449] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:27.741 [INFO][4449] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" host="localhost" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.096 [INFO][4449] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.513 [INFO][4449] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.598 [INFO][4449] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.687 [INFO][4449] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.687 [INFO][4449] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" host="localhost" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.740 [INFO][4449] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.884 [INFO][4449] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" host="localhost" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.947 [INFO][4449] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" host="localhost" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.948 [INFO][4449] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" host="localhost" Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.950 [INFO][4449] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 03:55:30.226589 containerd[1583]: 2026-04-21 03:55:28.950 [INFO][4449] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" HandleID="k8s-pod-network.daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Workload="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0" Apr 21 03:55:30.259958 containerd[1583]: 2026-04-21 03:55:28.979 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-hn2ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0", GenerateName:"calico-apiserver-6f46db48b5-", Namespace:"calico-system", SelfLink:"", UID:"847e62a6-0187-43f0-ba62-38e8155e121e", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 54, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f46db48b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f46db48b5-hn2ql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1494b217a07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:30.259958 containerd[1583]: 2026-04-21 03:55:29.037 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-hn2ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0" Apr 21 03:55:30.259958 containerd[1583]: 2026-04-21 03:55:29.038 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1494b217a07 ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-hn2ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0" Apr 21 03:55:30.259958 containerd[1583]: 2026-04-21 03:55:29.518 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-hn2ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0" Apr 21 03:55:30.259958 containerd[1583]: 2026-04-21 03:55:29.520 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-hn2ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0", GenerateName:"calico-apiserver-6f46db48b5-", Namespace:"calico-system", SelfLink:"", UID:"847e62a6-0187-43f0-ba62-38e8155e121e", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 54, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f46db48b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c", Pod:"calico-apiserver-6f46db48b5-hn2ql", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali1494b217a07", MAC:"ba:6b:a2:8b:10:cf", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:30.259958 containerd[1583]: 2026-04-21 03:55:29.833 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-hn2ql" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--hn2ql-eth0" Apr 21 03:55:30.564437 containerd[1583]: time="2026-04-21T03:55:30.558292922Z" level=error msg="get state for 94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a" error="context deadline exceeded" Apr 21 03:55:30.564437 containerd[1583]: time="2026-04-21T03:55:30.558435413Z" level=warning msg="unknown status" status=0 Apr 21 03:55:30.846746 systemd-networkd[1475]: cali1494b217a07: Gained IPv6LL Apr 21 03:55:31.624902 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 03:55:31.637301 containerd[1583]: time="2026-04-21T03:55:31.632628609Z" level=info msg="connecting to shim daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" address="unix:///run/containerd/s/90040d8905dad2713d41f165260d2a1fb0ab91667d0aed9f571b485191c6c589" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:55:31.837235 containerd[1583]: time="2026-04-21T03:55:31.836783611Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 21 03:55:32.748488 systemd-networkd[1475]: cali1e47cd4943f: Link UP Apr 21 03:55:32.771665 systemd-networkd[1475]: cali1e47cd4943f: Gained carrier Apr 21 03:55:34.000546 containerd[1583]: time="2026-04-21T03:55:34.000079667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c26j7,Uid:d682c086-ad9c-40b2-928f-12d71adad6b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a\"" Apr 21 03:55:34.115238 systemd[1]: Started sshd@10-10.0.0.123:22-10.0.0.1:60660.service - OpenSSH per-connection server daemon (10.0.0.1:60660). 
Apr 21 03:55:34.171879 systemd-networkd[1475]: cali1e47cd4943f: Gained IPv6LL Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:21.542 [ERROR][4372] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:22.429 [INFO][4372] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0 calico-kube-controllers-7c599c88cb- calico-system 366a6a55-9568-4706-b261-4d20a468a8f5 983 0 2026-04-21 03:54:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c599c88cb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c599c88cb-rsbkx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1e47cd4943f [] [] }} ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Namespace="calico-system" Pod="calico-kube-controllers-7c599c88cb-rsbkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:22.430 [INFO][4372] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Namespace="calico-system" Pod="calico-kube-controllers-7c599c88cb-rsbkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:26.442 [INFO][4469] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" 
HandleID="k8s-pod-network.70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Workload="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:27.202 [INFO][4469] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" HandleID="k8s-pod-network.70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Workload="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050b10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c599c88cb-rsbkx", "timestamp":"2026-04-21 03:55:26.442744491 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000324000)} Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:27.203 [INFO][4469] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:28.948 [INFO][4469] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:28.948 [INFO][4469] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:29.062 [INFO][4469] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" host="localhost" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:30.000 [INFO][4469] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:30.480 [INFO][4469] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:30.730 [INFO][4469] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:31.036 [INFO][4469] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:31.036 [INFO][4469] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" host="localhost" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:31.333 [INFO][4469] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835 Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:31.547 [INFO][4469] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" host="localhost" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:32.076 [INFO][4469] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" host="localhost" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:32.077 [INFO][4469] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" host="localhost" Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:32.105 [INFO][4469] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 03:55:34.336370 containerd[1583]: 2026-04-21 03:55:32.157 [INFO][4469] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" HandleID="k8s-pod-network.70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Workload="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0" Apr 21 03:55:34.417045 containerd[1583]: 2026-04-21 03:55:32.436 [INFO][4372] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Namespace="calico-system" Pod="calico-kube-controllers-7c599c88cb-rsbkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0", GenerateName:"calico-kube-controllers-7c599c88cb-", Namespace:"calico-system", SelfLink:"", UID:"366a6a55-9568-4706-b261-4d20a468a8f5", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 54, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c599c88cb", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c599c88cb-rsbkx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1e47cd4943f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:34.417045 containerd[1583]: 2026-04-21 03:55:32.443 [INFO][4372] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Namespace="calico-system" Pod="calico-kube-controllers-7c599c88cb-rsbkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0" Apr 21 03:55:34.417045 containerd[1583]: 2026-04-21 03:55:32.450 [INFO][4372] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e47cd4943f ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Namespace="calico-system" Pod="calico-kube-controllers-7c599c88cb-rsbkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0" Apr 21 03:55:34.417045 containerd[1583]: 2026-04-21 03:55:33.015 [INFO][4372] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Namespace="calico-system" Pod="calico-kube-controllers-7c599c88cb-rsbkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0" Apr 21 03:55:34.417045 containerd[1583]: 
2026-04-21 03:55:33.714 [INFO][4372] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Namespace="calico-system" Pod="calico-kube-controllers-7c599c88cb-rsbkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0", GenerateName:"calico-kube-controllers-7c599c88cb-", Namespace:"calico-system", SelfLink:"", UID:"366a6a55-9568-4706-b261-4d20a468a8f5", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 54, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c599c88cb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835", Pod:"calico-kube-controllers-7c599c88cb-rsbkx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1e47cd4943f", MAC:"8a:29:49:2d:8f:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:34.417045 containerd[1583]: 
2026-04-21 03:55:34.099 [INFO][4372] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" Namespace="calico-system" Pod="calico-kube-controllers-7c599c88cb-rsbkx" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c599c88cb--rsbkx-eth0" Apr 21 03:55:34.543309 containerd[1583]: time="2026-04-21T03:55:34.541282707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-btzcx,Uid:0fa5cc54-472b-49ea-8130-96d73740c97a,Namespace:kube-system,Attempt:0,} returns sandbox id \"2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4\"" Apr 21 03:55:34.693458 kubelet[2803]: E0421 03:55:34.593309 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:34.845103 systemd[1]: Started cri-containerd-daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c.scope - libcontainer container daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c. Apr 21 03:55:35.302737 systemd-networkd[1475]: cali96e77a08508: Link UP Apr 21 03:55:35.326741 systemd-networkd[1475]: cali96e77a08508: Gained carrier Apr 21 03:55:35.520472 containerd[1583]: time="2026-04-21T03:55:35.520054238Z" level=info msg="CreateContainer within sandbox \"2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 03:55:35.808958 sshd[4728]: Accepted publickey for core from 10.0.0.1 port 60660 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:55:35.801028 sshd-session[4728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:55:36.216688 systemd-logind[1551]: New session 11 of user core. Apr 21 03:55:36.222108 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:23.049 [ERROR][4384] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:25.522 [INFO][4384] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--cl26f-eth0 coredns-7d764666f9- kube-system aff92e04-d561-4ff0-a6d0-5a89cb86b276 997 0 2026-04-21 03:53:49 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-cl26f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96e77a08508 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Namespace="kube-system" Pod="coredns-7d764666f9-cl26f" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cl26f-" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:25.559 [INFO][4384] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Namespace="kube-system" Pod="coredns-7d764666f9-cl26f" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cl26f-eth0" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:27.603 [INFO][4511] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" HandleID="k8s-pod-network.32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Workload="localhost-k8s-coredns--7d764666f9--cl26f-eth0" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:27.675 
[INFO][4511] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" HandleID="k8s-pod-network.32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Workload="localhost-k8s-coredns--7d764666f9--cl26f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001173a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-cl26f", "timestamp":"2026-04-21 03:55:27.603530761 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00018f1e0)} Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:27.678 [INFO][4511] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:32.169 [INFO][4511] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:32.171 [INFO][4511] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:32.532 [INFO][4511] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" host="localhost" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:33.340 [INFO][4511] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:33.661 [INFO][4511] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:33.995 [INFO][4511] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:34.113 [INFO][4511] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:34.148 [INFO][4511] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" host="localhost" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:34.414 [INFO][4511] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:34.626 [INFO][4511] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" host="localhost" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:34.857 [INFO][4511] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" host="localhost" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:34.972 [INFO][4511] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" host="localhost" Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:34.974 [INFO][4511] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 03:55:36.753041 containerd[1583]: 2026-04-21 03:55:34.975 [INFO][4511] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" HandleID="k8s-pod-network.32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Workload="localhost-k8s-coredns--7d764666f9--cl26f-eth0" Apr 21 03:55:36.764183 containerd[1583]: 2026-04-21 03:55:35.122 [INFO][4384] cni-plugin/k8s.go 418: Populated endpoint ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Namespace="kube-system" Pod="coredns-7d764666f9-cl26f" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cl26f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--cl26f-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"aff92e04-d561-4ff0-a6d0-5a89cb86b276", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 53, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-cl26f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96e77a08508", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:36.764183 containerd[1583]: 2026-04-21 03:55:35.273 [INFO][4384] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Namespace="kube-system" Pod="coredns-7d764666f9-cl26f" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cl26f-eth0" Apr 21 03:55:36.764183 containerd[1583]: 2026-04-21 03:55:35.274 [INFO][4384] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96e77a08508 ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Namespace="kube-system" Pod="coredns-7d764666f9-cl26f" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cl26f-eth0" Apr 21 
03:55:36.764183 containerd[1583]: 2026-04-21 03:55:35.487 [INFO][4384] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Namespace="kube-system" Pod="coredns-7d764666f9-cl26f" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cl26f-eth0" Apr 21 03:55:36.764183 containerd[1583]: 2026-04-21 03:55:35.810 [INFO][4384] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Namespace="kube-system" Pod="coredns-7d764666f9-cl26f" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cl26f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--cl26f-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"aff92e04-d561-4ff0-a6d0-5a89cb86b276", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 53, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b", Pod:"coredns-7d764666f9-cl26f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96e77a08508", 
MAC:"ca:52:54:cd:7c:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:36.764183 containerd[1583]: 2026-04-21 03:55:36.539 [INFO][4384] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" Namespace="kube-system" Pod="coredns-7d764666f9-cl26f" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--cl26f-eth0" Apr 21 03:55:37.049991 systemd-networkd[1475]: cali96e77a08508: Gained IPv6LL Apr 21 03:55:37.199653 containerd[1583]: time="2026-04-21T03:55:37.086049864Z" level=info msg="connecting to shim 70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835" address="unix:///run/containerd/s/176dbb12faff6beed31a5b895cbed77c71968531014da723a42022d73c44fbdb" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:55:37.273355 containerd[1583]: time="2026-04-21T03:55:37.272507978Z" level=error msg="get state for daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c" error="context deadline exceeded" Apr 21 03:55:37.878406 containerd[1583]: time="2026-04-21T03:55:37.877414633Z" level=warning msg="unknown status" status=0 Apr 21 03:55:38.487948 systemd-resolved[1478]: Failed to determine 
the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 03:55:38.501928 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210313012.mount: Deactivated successfully. Apr 21 03:55:38.668561 containerd[1583]: time="2026-04-21T03:55:38.666906480Z" level=info msg="Container 0202d8de5d4d71cf98c79905ecd4ad636584846e42c14076801b33ea491d5a2e: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:55:38.688364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515905262.mount: Deactivated successfully. Apr 21 03:55:38.878972 sshd[4770]: Connection closed by 10.0.0.1 port 60660 Apr 21 03:55:38.865289 sshd-session[4728]: pam_unix(sshd:session): session closed for user core Apr 21 03:55:39.015488 systemd[1]: sshd@10-10.0.0.123:22-10.0.0.1:60660.service: Deactivated successfully. Apr 21 03:55:39.060116 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 03:55:39.063724 systemd-networkd[1475]: vxlan.calico: Link UP Apr 21 03:55:39.066039 systemd-networkd[1475]: vxlan.calico: Gained carrier Apr 21 03:55:39.068578 systemd-logind[1551]: Session 11 logged out. Waiting for processes to exit. Apr 21 03:55:39.072442 systemd-logind[1551]: Removed session 11. 
Apr 21 03:55:39.193953 systemd-networkd[1475]: cali7f080a8c376: Link UP Apr 21 03:55:39.274577 systemd-networkd[1475]: cali7f080a8c376: Gained carrier Apr 21 03:55:39.573060 containerd[1583]: time="2026-04-21T03:55:39.571139464Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 21 03:55:39.655366 containerd[1583]: time="2026-04-21T03:55:39.655186146Z" level=info msg="CreateContainer within sandbox \"2432755c48214d16a9b3a81e3f60109f73dfb82b46671df427b452b1903c75b4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0202d8de5d4d71cf98c79905ecd4ad636584846e42c14076801b33ea491d5a2e\"" Apr 21 03:55:39.694661 containerd[1583]: time="2026-04-21T03:55:39.694602495Z" level=info msg="StartContainer for \"0202d8de5d4d71cf98c79905ecd4ad636584846e42c14076801b33ea491d5a2e\"" Apr 21 03:55:39.712407 containerd[1583]: time="2026-04-21T03:55:39.712281791Z" level=info msg="connecting to shim 0202d8de5d4d71cf98c79905ecd4ad636584846e42c14076801b33ea491d5a2e" address="unix:///run/containerd/s/86c2cdb9db793662ff1d38519ed83ffea7ecbd636b6f6a5bc8cd1e4635286215" protocol=ttrpc version=3 Apr 21 03:55:39.725669 systemd[1]: Started cri-containerd-70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835.scope - libcontainer container 70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835. 
Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:18.439 [ERROR][4311] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:21.888 [INFO][4311] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0 goldmane-7fb6cdc5d9- calico-system e5251af5-60b9-44d8-b574-ace9275add08 993 0 2026-04-21 03:54:08 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7fb6cdc5d9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-7fb6cdc5d9-gdrxk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7f080a8c376 [] [] }} ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gdrxk" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:21.889 [INFO][4311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gdrxk" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:27.593 [INFO][4465] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" HandleID="k8s-pod-network.789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Workload="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:27.729 [INFO][4465] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" HandleID="k8s-pod-network.789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Workload="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004181a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-7fb6cdc5d9-gdrxk", "timestamp":"2026-04-21 03:55:27.593568745 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004689a0)} Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:27.734 [INFO][4465] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:34.992 [INFO][4465] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:34.994 [INFO][4465] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:35.294 [INFO][4465] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" host="localhost" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:35.874 [INFO][4465] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:36.253 [INFO][4465] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:36.466 [INFO][4465] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:36.605 [INFO][4465] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:36.691 [INFO][4465] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" host="localhost" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:36.842 [INFO][4465] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:37.241 [INFO][4465] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" host="localhost" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:37.485 [INFO][4465] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" host="localhost" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:37.598 [INFO][4465] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" host="localhost" Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:37.605 [INFO][4465] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 03:55:39.785490 containerd[1583]: 2026-04-21 03:55:37.605 [INFO][4465] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" HandleID="k8s-pod-network.789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Workload="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0" Apr 21 03:55:39.793263 containerd[1583]: 2026-04-21 03:55:38.346 [INFO][4311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gdrxk" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0", GenerateName:"goldmane-7fb6cdc5d9-", Namespace:"calico-system", SelfLink:"", UID:"e5251af5-60b9-44d8-b574-ace9275add08", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 54, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7fb6cdc5d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-7fb6cdc5d9-gdrxk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7f080a8c376", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:39.793263 containerd[1583]: 2026-04-21 03:55:38.376 [INFO][4311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gdrxk" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0" Apr 21 03:55:39.793263 containerd[1583]: 2026-04-21 03:55:38.511 [INFO][4311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7f080a8c376 ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gdrxk" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0" Apr 21 03:55:39.793263 containerd[1583]: 2026-04-21 03:55:39.276 [INFO][4311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gdrxk" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0" Apr 21 03:55:39.793263 containerd[1583]: 2026-04-21 03:55:39.307 [INFO][4311] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gdrxk" 
WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0", GenerateName:"goldmane-7fb6cdc5d9-", Namespace:"calico-system", SelfLink:"", UID:"e5251af5-60b9-44d8-b574-ace9275add08", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 54, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7fb6cdc5d9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea", Pod:"goldmane-7fb6cdc5d9-gdrxk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7f080a8c376", MAC:"fe:bd:78:67:49:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:39.793263 containerd[1583]: 2026-04-21 03:55:39.698 [INFO][4311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" Namespace="calico-system" Pod="goldmane-7fb6cdc5d9-gdrxk" WorkloadEndpoint="localhost-k8s-goldmane--7fb6cdc5d9--gdrxk-eth0" Apr 21 03:55:40.161618 containerd[1583]: time="2026-04-21T03:55:40.142342480Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6f46db48b5-hn2ql,Uid:847e62a6-0187-43f0-ba62-38e8155e121e,Namespace:calico-system,Attempt:0,} returns sandbox id \"daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c\"" Apr 21 03:55:40.273500 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 03:55:40.331776 containerd[1583]: time="2026-04-21T03:55:40.330681825Z" level=info msg="connecting to shim 32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b" address="unix:///run/containerd/s/55473bb976cccc60e09fc1e2bc748d23021a51e82729625634eddea7ca0d1065" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:55:40.567988 systemd-networkd[1475]: cali92fa5bfd69d: Link UP Apr 21 03:55:40.570476 systemd-networkd[1475]: cali92fa5bfd69d: Gained carrier Apr 21 03:55:40.844006 systemd-networkd[1475]: vxlan.calico: Gained IPv6LL Apr 21 03:55:40.859524 systemd[1]: Started cri-containerd-0202d8de5d4d71cf98c79905ecd4ad636584846e42c14076801b33ea491d5a2e.scope - libcontainer container 0202d8de5d4d71cf98c79905ecd4ad636584846e42c14076801b33ea491d5a2e. 
Apr 21 03:55:40.954122 systemd-networkd[1475]: cali7f080a8c376: Gained IPv6LL Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:24.596 [ERROR][4412] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:26.772 [INFO][4412] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0 calico-apiserver-6f46db48b5- calico-system dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56 994 0 2026-04-21 03:54:08 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6f46db48b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6f46db48b5-v9v7j eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali92fa5bfd69d [] [] }} ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-v9v7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:26.773 [INFO][4412] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-v9v7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:27.767 [INFO][4526] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" HandleID="k8s-pod-network.5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" 
Workload="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:28.062 [INFO][4526] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" HandleID="k8s-pod-network.5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" Workload="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051560), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-6f46db48b5-v9v7j", "timestamp":"2026-04-21 03:55:27.767441739 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000ec420)} Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:28.062 [INFO][4526] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:37.599 [INFO][4526] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:37.600 [INFO][4526] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:37.960 [INFO][4526] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" host="localhost" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:38.639 [INFO][4526] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:38.966 [INFO][4526] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:39.188 [INFO][4526] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:39.718 [INFO][4526] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:39.718 [INFO][4526] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" host="localhost" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:39.757 [INFO][4526] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:39.898 [INFO][4526] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" host="localhost" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:40.343 [INFO][4526] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" host="localhost" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:40.351 [INFO][4526] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" host="localhost" Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:40.353 [INFO][4526] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 03:55:41.176306 containerd[1583]: 2026-04-21 03:55:40.366 [INFO][4526] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" HandleID="k8s-pod-network.5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" Workload="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0" Apr 21 03:55:41.360776 containerd[1583]: 2026-04-21 03:55:40.517 [INFO][4412] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-v9v7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0", GenerateName:"calico-apiserver-6f46db48b5-", Namespace:"calico-system", SelfLink:"", UID:"dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 54, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f46db48b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6f46db48b5-v9v7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali92fa5bfd69d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:41.360776 containerd[1583]: 2026-04-21 03:55:40.519 [INFO][4412] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-v9v7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0" Apr 21 03:55:41.360776 containerd[1583]: 2026-04-21 03:55:40.526 [INFO][4412] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92fa5bfd69d ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-v9v7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0" Apr 21 03:55:41.360776 containerd[1583]: 2026-04-21 03:55:40.567 [INFO][4412] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-v9v7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0" Apr 21 03:55:41.360776 containerd[1583]: 2026-04-21 03:55:40.569 [INFO][4412] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-v9v7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0", GenerateName:"calico-apiserver-6f46db48b5-", Namespace:"calico-system", SelfLink:"", UID:"dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 3, 54, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6f46db48b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d", Pod:"calico-apiserver-6f46db48b5-v9v7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali92fa5bfd69d", MAC:"c2:65:62:da:48:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 03:55:41.360776 containerd[1583]: 2026-04-21 03:55:40.972 [INFO][4412] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" Namespace="calico-system" Pod="calico-apiserver-6f46db48b5-v9v7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--6f46db48b5--v9v7j-eth0" Apr 21 03:55:41.809004 systemd[1]: Started cri-containerd-32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b.scope - libcontainer container 32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b. Apr 21 03:55:42.132961 containerd[1583]: time="2026-04-21T03:55:42.130666266Z" level=info msg="connecting to shim 789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" address="unix:///run/containerd/s/230f8182eebb41c270e9c443e48930f9ea58b09ead2568cbda66122cfff1856b" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:55:42.222019 containerd[1583]: time="2026-04-21T03:55:42.220991112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c599c88cb-rsbkx,Uid:366a6a55-9568-4706-b261-4d20a468a8f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835\"" Apr 21 03:55:42.454297 systemd-networkd[1475]: cali92fa5bfd69d: Gained IPv6LL Apr 21 03:55:43.040615 containerd[1583]: time="2026-04-21T03:55:43.040112483Z" level=info msg="connecting to shim 5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d" address="unix:///run/containerd/s/b8d0314f914da36033622b0bb72ab0bd935818ed50797c226b367e2fc139be38" namespace=k8s.io protocol=ttrpc version=3 Apr 21 03:55:43.630831 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 03:55:44.035621 systemd[1]: Started sshd@11-10.0.0.123:22-10.0.0.1:50490.service - OpenSSH per-connection server daemon (10.0.0.1:50490). 
Apr 21 03:55:44.558423 systemd[1]: Started cri-containerd-789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea.scope - libcontainer container 789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea. Apr 21 03:55:44.965524 systemd[1]: Started cri-containerd-5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d.scope - libcontainer container 5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d. Apr 21 03:55:45.328402 kubelet[2803]: E0421 03:55:45.300034 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:45.473883 sshd[5039]: Accepted publickey for core from 10.0.0.1 port 50490 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:55:45.555321 sshd-session[5039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:55:45.613266 kubelet[2803]: E0421 03:55:45.612044 2803 cadvisor_stats_provider.go:569] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaff92e04_d561_4ff0_a6d0_5a89cb86b276.slice/cri-containerd-32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b.scope\": RecentStats: unable to find data in memory cache]" Apr 21 03:55:45.673256 containerd[1583]: time="2026-04-21T03:55:45.672618390Z" level=info msg="StartContainer for \"0202d8de5d4d71cf98c79905ecd4ad636584846e42c14076801b33ea491d5a2e\" returns successfully" Apr 21 03:55:45.989069 containerd[1583]: time="2026-04-21T03:55:45.987355191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-cl26f,Uid:aff92e04-d561-4ff0-a6d0-5a89cb86b276,Namespace:kube-system,Attempt:0,} returns sandbox id \"32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b\"" Apr 21 03:55:46.060039 systemd-logind[1551]: New session 12 of user core. 
Apr 21 03:55:46.175647 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 03:55:46.315388 kubelet[2803]: E0421 03:55:46.307271 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:46.800453 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 03:55:46.971937 containerd[1583]: time="2026-04-21T03:55:46.966120415Z" level=error msg="get state for 789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea" error="context deadline exceeded" Apr 21 03:55:46.971937 containerd[1583]: time="2026-04-21T03:55:46.966790282Z" level=warning msg="unknown status" status=0 Apr 21 03:55:47.026694 containerd[1583]: time="2026-04-21T03:55:47.016688081Z" level=info msg="CreateContainer within sandbox \"32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 03:55:47.272623 systemd-resolved[1478]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 21 03:55:47.697706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125439449.mount: Deactivated successfully. 
Apr 21 03:55:47.823423 containerd[1583]: time="2026-04-21T03:55:47.822734725Z" level=info msg="Container ac9ff96cc99586b6490b2b5f4d08ca3ecd175693536fa8b45866f17774053502: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:55:47.856494 kubelet[2803]: E0421 03:55:47.850736 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:48.571968 containerd[1583]: time="2026-04-21T03:55:48.412944226Z" level=error msg="ttrpc: received message on inactive stream" stream=3 Apr 21 03:55:48.721276 sshd[5082]: Connection closed by 10.0.0.1 port 50490 Apr 21 03:55:48.729736 sshd-session[5039]: pam_unix(sshd:session): session closed for user core Apr 21 03:55:48.925122 systemd[1]: sshd@11-10.0.0.123:22-10.0.0.1:50490.service: Deactivated successfully. Apr 21 03:55:49.032723 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 03:55:49.049613 systemd-logind[1551]: Session 12 logged out. Waiting for processes to exit. Apr 21 03:55:49.055533 systemd-logind[1551]: Removed session 12. 
Apr 21 03:55:49.249648 containerd[1583]: time="2026-04-21T03:55:49.238650474Z" level=info msg="CreateContainer within sandbox \"32e8145cb12741e909c1d636c24f783e9d6010bf8a06dd6adc4cca63628cfd4b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac9ff96cc99586b6490b2b5f4d08ca3ecd175693536fa8b45866f17774053502\"" Apr 21 03:55:49.267828 kubelet[2803]: I0421 03:55:49.263557 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-btzcx" podStartSLOduration=120.263391218 podStartE2EDuration="2m0.263391218s" podCreationTimestamp="2026-04-21 03:53:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 03:55:48.368879671 +0000 UTC m=+123.597200212" watchObservedRunningTime="2026-04-21 03:55:49.263391218 +0000 UTC m=+124.491711745" Apr 21 03:55:49.469744 containerd[1583]: time="2026-04-21T03:55:49.468559733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6f46db48b5-v9v7j,Uid:dc0655cd-5d3b-4dbe-afdd-f75c7fa4ac56,Namespace:calico-system,Attempt:0,} returns sandbox id \"5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d\"" Apr 21 03:55:49.475377 containerd[1583]: time="2026-04-21T03:55:49.470110361Z" level=info msg="StartContainer for \"ac9ff96cc99586b6490b2b5f4d08ca3ecd175693536fa8b45866f17774053502\"" Apr 21 03:55:49.539458 containerd[1583]: time="2026-04-21T03:55:49.534759514Z" level=info msg="connecting to shim ac9ff96cc99586b6490b2b5f4d08ca3ecd175693536fa8b45866f17774053502" address="unix:///run/containerd/s/55473bb976cccc60e09fc1e2bc748d23021a51e82729625634eddea7ca0d1065" protocol=ttrpc version=3 Apr 21 03:55:49.733482 containerd[1583]: time="2026-04-21T03:55:49.733262576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7fb6cdc5d9-gdrxk,Uid:e5251af5-60b9-44d8-b574-ace9275add08,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea\"" Apr 21 03:55:49.872504 kubelet[2803]: E0421 03:55:49.863649 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:49.966131 containerd[1583]: time="2026-04-21T03:55:49.965183791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:55:49.975551 containerd[1583]: time="2026-04-21T03:55:49.975393524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.5: active requests=0, bytes read=6050387" Apr 21 03:55:50.142651 containerd[1583]: time="2026-04-21T03:55:50.132063497Z" level=info msg="ImageCreate event name:\"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:55:50.348853 containerd[1583]: time="2026-04-21T03:55:50.348039891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:55:50.477224 containerd[1583]: time="2026-04-21T03:55:50.477012096Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.5\" with image id \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:b143cf26c347546feabb95cec04a2349f5ae297830cc54fdc2578b89d1a3e021\", size \"9011804\" in 23.935784941s" Apr 21 03:55:50.477224 containerd[1583]: time="2026-04-21T03:55:50.477141449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.5\" returns image reference \"sha256:50f42a8b70f740407562ef3a08c005eb77150af95c21140e6080af9e61c8f197\"" Apr 21 03:55:50.606392 containerd[1583]: 
time="2026-04-21T03:55:50.595621241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.5\"" Apr 21 03:55:50.619754 systemd[1]: Started cri-containerd-ac9ff96cc99586b6490b2b5f4d08ca3ecd175693536fa8b45866f17774053502.scope - libcontainer container ac9ff96cc99586b6490b2b5f4d08ca3ecd175693536fa8b45866f17774053502. Apr 21 03:55:50.962462 containerd[1583]: time="2026-04-21T03:55:50.889103021Z" level=info msg="CreateContainer within sandbox \"2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 03:55:51.601430 kubelet[2803]: E0421 03:55:51.595134 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:51.871426 containerd[1583]: time="2026-04-21T03:55:51.860491984Z" level=info msg="Container a7b5f0a15ad44ab1a2ef17a0bc4fe4a7ecb25a441e18816ac078092b6a721430: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:55:51.869010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4004922540.mount: Deactivated successfully. 
Apr 21 03:55:52.238803 containerd[1583]: time="2026-04-21T03:55:52.236646646Z" level=info msg="CreateContainer within sandbox \"2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"a7b5f0a15ad44ab1a2ef17a0bc4fe4a7ecb25a441e18816ac078092b6a721430\"" Apr 21 03:55:52.254906 containerd[1583]: time="2026-04-21T03:55:52.252745261Z" level=info msg="StartContainer for \"a7b5f0a15ad44ab1a2ef17a0bc4fe4a7ecb25a441e18816ac078092b6a721430\"" Apr 21 03:55:52.345552 containerd[1583]: time="2026-04-21T03:55:52.338946873Z" level=info msg="connecting to shim a7b5f0a15ad44ab1a2ef17a0bc4fe4a7ecb25a441e18816ac078092b6a721430" address="unix:///run/containerd/s/aaa1c6525e9544c249c2779721d8c0ca584c708454e8c8b59dcc735aa8fc1db6" protocol=ttrpc version=3 Apr 21 03:55:52.377524 containerd[1583]: time="2026-04-21T03:55:52.371811769Z" level=info msg="StartContainer for \"ac9ff96cc99586b6490b2b5f4d08ca3ecd175693536fa8b45866f17774053502\" returns successfully" Apr 21 03:55:52.949586 systemd[1]: Started cri-containerd-a7b5f0a15ad44ab1a2ef17a0bc4fe4a7ecb25a441e18816ac078092b6a721430.scope - libcontainer container a7b5f0a15ad44ab1a2ef17a0bc4fe4a7ecb25a441e18816ac078092b6a721430. Apr 21 03:55:53.061398 kubelet[2803]: E0421 03:55:53.058381 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:53.945490 systemd[1]: Started sshd@12-10.0.0.123:22-10.0.0.1:60376.service - OpenSSH per-connection server daemon (10.0.0.1:60376). 
Apr 21 03:55:54.158410 kubelet[2803]: E0421 03:55:54.156534 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:54.306493 containerd[1583]: time="2026-04-21T03:55:54.299442524Z" level=info msg="StartContainer for \"a7b5f0a15ad44ab1a2ef17a0bc4fe4a7ecb25a441e18816ac078092b6a721430\" returns successfully" Apr 21 03:55:54.620375 kubelet[2803]: I0421 03:55:54.616915 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-cl26f" podStartSLOduration=125.616640713 podStartE2EDuration="2m5.616640713s" podCreationTimestamp="2026-04-21 03:53:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 03:55:53.497694622 +0000 UTC m=+128.726015126" watchObservedRunningTime="2026-04-21 03:55:54.616640713 +0000 UTC m=+129.844961202" Apr 21 03:55:54.723986 sshd[5231]: Accepted publickey for core from 10.0.0.1 port 60376 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:55:54.760050 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:55:54.858758 systemd-logind[1551]: New session 13 of user core. Apr 21 03:55:54.905576 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 03:55:55.389257 kubelet[2803]: E0421 03:55:55.388427 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:56.061968 sshd[5242]: Connection closed by 10.0.0.1 port 60376 Apr 21 03:55:56.067083 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Apr 21 03:55:56.223500 systemd[1]: sshd@12-10.0.0.123:22-10.0.0.1:60376.service: Deactivated successfully. 
Apr 21 03:55:56.277908 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 03:55:56.391316 systemd-logind[1551]: Session 13 logged out. Waiting for processes to exit. Apr 21 03:55:56.458687 kubelet[2803]: E0421 03:55:56.458346 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:55:56.476907 systemd-logind[1551]: Removed session 13. Apr 21 03:55:57.354915 containerd[1583]: time="2026-04-21T03:55:57.354349489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:55:57.420651 containerd[1583]: time="2026-04-21T03:55:57.361579336Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.5: active requests=0, bytes read=8535421" Apr 21 03:55:57.426942 containerd[1583]: time="2026-04-21T03:55:57.426540915Z" level=info msg="ImageCreate event name:\"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:55:57.472239 containerd[1583]: time="2026-04-21T03:55:57.471853011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:55:57.506048 containerd[1583]: time="2026-04-21T03:55:57.505074921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.5\" with image id \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e8a5b44388a309910946072582b1a1f283c52cf73e9825179235d934447c8b7d\", size \"11496846\" in 6.89242303s" Apr 21 03:55:57.506048 containerd[1583]: time="2026-04-21T03:55:57.505880409Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.31.5\" returns image reference \"sha256:94e17390bb55c802657312c601a05da4abfb9d9311bef8a389a19fd8a5388a96\"" Apr 21 03:55:57.521591 containerd[1583]: time="2026-04-21T03:55:57.520376686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 21 03:55:57.642250 containerd[1583]: time="2026-04-21T03:55:57.638536514Z" level=info msg="CreateContainer within sandbox \"94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 21 03:55:57.921138 containerd[1583]: time="2026-04-21T03:55:57.881685677Z" level=info msg="Container 88ea09a169ad770c6d5b7d5fb36fb4c90a8fb2ce66b9ced6c1f4b12220c0a84b: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:55:58.055324 containerd[1583]: time="2026-04-21T03:55:58.054686150Z" level=info msg="CreateContainer within sandbox \"94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"88ea09a169ad770c6d5b7d5fb36fb4c90a8fb2ce66b9ced6c1f4b12220c0a84b\"" Apr 21 03:55:58.062042 containerd[1583]: time="2026-04-21T03:55:58.061629038Z" level=info msg="StartContainer for \"88ea09a169ad770c6d5b7d5fb36fb4c90a8fb2ce66b9ced6c1f4b12220c0a84b\"" Apr 21 03:55:58.142478 containerd[1583]: time="2026-04-21T03:55:58.138080868Z" level=info msg="connecting to shim 88ea09a169ad770c6d5b7d5fb36fb4c90a8fb2ce66b9ced6c1f4b12220c0a84b" address="unix:///run/containerd/s/1cf32a5fc1b354d774abecc1a07157675ae5a626f626b63261fd1cd75fca4c44" protocol=ttrpc version=3 Apr 21 03:55:58.553604 systemd[1]: Started cri-containerd-88ea09a169ad770c6d5b7d5fb36fb4c90a8fb2ce66b9ced6c1f4b12220c0a84b.scope - libcontainer container 88ea09a169ad770c6d5b7d5fb36fb4c90a8fb2ce66b9ced6c1f4b12220c0a84b. 
Apr 21 03:56:00.369839 containerd[1583]: time="2026-04-21T03:56:00.366597700Z" level=info msg="StartContainer for \"88ea09a169ad770c6d5b7d5fb36fb4c90a8fb2ce66b9ced6c1f4b12220c0a84b\" returns successfully" Apr 21 03:56:01.212618 systemd[1]: Started sshd@13-10.0.0.123:22-10.0.0.1:52922.service - OpenSSH per-connection server daemon (10.0.0.1:52922). Apr 21 03:56:02.031800 sshd[5307]: Accepted publickey for core from 10.0.0.1 port 52922 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:56:02.045632 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:56:02.252880 systemd-logind[1551]: New session 14 of user core. Apr 21 03:56:02.297912 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 03:56:03.529344 sshd[5315]: Connection closed by 10.0.0.1 port 52922 Apr 21 03:56:03.538456 sshd-session[5307]: pam_unix(sshd:session): session closed for user core Apr 21 03:56:03.673461 systemd[1]: sshd@13-10.0.0.123:22-10.0.0.1:52922.service: Deactivated successfully. Apr 21 03:56:03.794134 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 03:56:03.826017 systemd-logind[1551]: Session 14 logged out. Waiting for processes to exit. Apr 21 03:56:03.929651 systemd-logind[1551]: Removed session 14. Apr 21 03:56:04.232244 kubelet[2803]: E0421 03:56:04.229551 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:56:08.763260 systemd[1]: Started sshd@14-10.0.0.123:22-10.0.0.1:56846.service - OpenSSH per-connection server daemon (10.0.0.1:56846). 
Apr 21 03:56:09.442599 sshd[5364]: Accepted publickey for core from 10.0.0.1 port 56846 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:56:09.501501 sshd-session[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:56:09.669974 systemd-logind[1551]: New session 15 of user core. Apr 21 03:56:09.762569 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 03:56:10.603944 sshd[5367]: Connection closed by 10.0.0.1 port 56846 Apr 21 03:56:10.605555 sshd-session[5364]: pam_unix(sshd:session): session closed for user core Apr 21 03:56:10.726098 systemd[1]: sshd@14-10.0.0.123:22-10.0.0.1:56846.service: Deactivated successfully. Apr 21 03:56:10.772469 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 03:56:10.787286 systemd-logind[1551]: Session 15 logged out. Waiting for processes to exit. Apr 21 03:56:10.831732 systemd-logind[1551]: Removed session 15. Apr 21 03:56:13.609590 containerd[1583]: time="2026-04-21T03:56:13.609246657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:56:13.620347 containerd[1583]: time="2026-04-21T03:56:13.615518753Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=46175896" Apr 21 03:56:13.629361 containerd[1583]: time="2026-04-21T03:56:13.628971935Z" level=info msg="ImageCreate event name:\"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:56:13.668213 containerd[1583]: time="2026-04-21T03:56:13.667522058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:56:13.749126 containerd[1583]: time="2026-04-21T03:56:13.748081281Z" 
level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 16.227277059s" Apr 21 03:56:13.749126 containerd[1583]: time="2026-04-21T03:56:13.748463831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 21 03:56:13.772742 containerd[1583]: time="2026-04-21T03:56:13.771976267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\"" Apr 21 03:56:13.814029 containerd[1583]: time="2026-04-21T03:56:13.813912770Z" level=info msg="CreateContainer within sandbox \"daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 03:56:13.923638 containerd[1583]: time="2026-04-21T03:56:13.921770063Z" level=info msg="Container 2db97294673b8d7a8d870971945e95068f78056ffc06bf72a7814b35ce4ef674: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:56:13.988265 containerd[1583]: time="2026-04-21T03:56:13.987693545Z" level=info msg="CreateContainer within sandbox \"daac1da80059b2a443b5304b7833ec97866802bc84c0d28cd0f10c9bdb140e0c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2db97294673b8d7a8d870971945e95068f78056ffc06bf72a7814b35ce4ef674\"" Apr 21 03:56:14.006958 containerd[1583]: time="2026-04-21T03:56:14.002511966Z" level=info msg="StartContainer for \"2db97294673b8d7a8d870971945e95068f78056ffc06bf72a7814b35ce4ef674\"" Apr 21 03:56:14.028653 containerd[1583]: time="2026-04-21T03:56:14.027640428Z" level=info msg="connecting to shim 2db97294673b8d7a8d870971945e95068f78056ffc06bf72a7814b35ce4ef674" 
address="unix:///run/containerd/s/90040d8905dad2713d41f165260d2a1fb0ab91667d0aed9f571b485191c6c589" protocol=ttrpc version=3 Apr 21 03:56:14.334787 systemd[1]: Started cri-containerd-2db97294673b8d7a8d870971945e95068f78056ffc06bf72a7814b35ce4ef674.scope - libcontainer container 2db97294673b8d7a8d870971945e95068f78056ffc06bf72a7814b35ce4ef674. Apr 21 03:56:15.219092 containerd[1583]: time="2026-04-21T03:56:15.218762274Z" level=info msg="StartContainer for \"2db97294673b8d7a8d870971945e95068f78056ffc06bf72a7814b35ce4ef674\" returns successfully" Apr 21 03:56:15.690077 systemd[1]: Started sshd@15-10.0.0.123:22-10.0.0.1:56114.service - OpenSSH per-connection server daemon (10.0.0.1:56114). Apr 21 03:56:16.660387 sshd[5424]: Accepted publickey for core from 10.0.0.1 port 56114 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:56:16.666018 kubelet[2803]: I0421 03:56:16.664783 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f46db48b5-hn2ql" podStartSLOduration=95.2545523 podStartE2EDuration="2m8.664627928s" podCreationTimestamp="2026-04-21 03:54:08 +0000 UTC" firstStartedPulling="2026-04-21 03:55:40.349500857 +0000 UTC m=+115.577821358" lastFinishedPulling="2026-04-21 03:56:13.759576491 +0000 UTC m=+148.987896986" observedRunningTime="2026-04-21 03:56:16.658344437 +0000 UTC m=+151.886664929" watchObservedRunningTime="2026-04-21 03:56:16.664627928 +0000 UTC m=+151.892948416" Apr 21 03:56:16.720794 sshd-session[5424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:56:16.889008 systemd-logind[1551]: New session 16 of user core. Apr 21 03:56:16.899905 systemd[1]: Started session-16.scope - Session 16 of User core. 
Apr 21 03:56:17.360141 kubelet[2803]: E0421 03:56:17.359423 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:56:18.057060 sshd[5433]: Connection closed by 10.0.0.1 port 56114 Apr 21 03:56:18.065791 sshd-session[5424]: pam_unix(sshd:session): session closed for user core Apr 21 03:56:18.159098 systemd[1]: sshd@15-10.0.0.123:22-10.0.0.1:56114.service: Deactivated successfully. Apr 21 03:56:18.188387 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 03:56:18.197899 systemd-logind[1551]: Session 16 logged out. Waiting for processes to exit. Apr 21 03:56:18.241475 systemd-logind[1551]: Removed session 16. Apr 21 03:56:22.240788 kubelet[2803]: E0421 03:56:22.240395 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:56:23.136902 systemd[1]: Started sshd@16-10.0.0.123:22-10.0.0.1:56118.service - OpenSSH per-connection server daemon (10.0.0.1:56118). Apr 21 03:56:23.718596 sshd[5455]: Accepted publickey for core from 10.0.0.1 port 56118 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:56:23.754352 sshd-session[5455]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:56:23.884763 systemd-logind[1551]: New session 17 of user core. Apr 21 03:56:23.908724 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 21 03:56:25.558727 sshd[5458]: Connection closed by 10.0.0.1 port 56118 Apr 21 03:56:25.564849 sshd-session[5455]: pam_unix(sshd:session): session closed for user core Apr 21 03:56:25.609763 systemd-logind[1551]: Session 17 logged out. Waiting for processes to exit. Apr 21 03:56:25.614364 systemd[1]: sshd@16-10.0.0.123:22-10.0.0.1:56118.service: Deactivated successfully. 
Apr 21 03:56:25.662138 systemd[1]: session-17.scope: Deactivated successfully. Apr 21 03:56:25.708684 systemd-logind[1551]: Removed session 17. Apr 21 03:56:27.524559 containerd[1583]: time="2026-04-21T03:56:27.524121175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.5: active requests=0, bytes read=50078175" Apr 21 03:56:27.581514 containerd[1583]: time="2026-04-21T03:56:27.581213700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:56:27.591397 containerd[1583]: time="2026-04-21T03:56:27.590506845Z" level=info msg="ImageCreate event name:\"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:56:27.602678 containerd[1583]: time="2026-04-21T03:56:27.601736331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:56:27.607383 containerd[1583]: time="2026-04-21T03:56:27.602839180Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" with image id \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5fa7fb7e707d54479cd5d93cfe42352076b805f36560df457b53701d9e738d72\", size \"53039568\" in 13.830727029s" Apr 21 03:56:27.607383 containerd[1583]: time="2026-04-21T03:56:27.605080865Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.5\" returns image reference \"sha256:d686db0e796dab36cb761ce46b93cabed881d9328bea92a965ad505653a85e37\"" Apr 21 03:56:27.630441 containerd[1583]: time="2026-04-21T03:56:27.624134244Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.31.5\"" Apr 21 03:56:27.813793 containerd[1583]: time="2026-04-21T03:56:27.808475843Z" level=info msg="CreateContainer within sandbox \"70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 21 03:56:27.879817 containerd[1583]: time="2026-04-21T03:56:27.878492562Z" level=info msg="Container 33b9cd801fe05ad71acf5f63c1b374c232506327ad9da06fe43f297646bf0205: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:56:27.971531 containerd[1583]: time="2026-04-21T03:56:27.969849798Z" level=info msg="CreateContainer within sandbox \"70efaeeccdecfecbe529a02b56de185674c2ede5aa3e87fd96fab4b83e064835\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"33b9cd801fe05ad71acf5f63c1b374c232506327ad9da06fe43f297646bf0205\"" Apr 21 03:56:28.031487 containerd[1583]: time="2026-04-21T03:56:28.030812576Z" level=info msg="StartContainer for \"33b9cd801fe05ad71acf5f63c1b374c232506327ad9da06fe43f297646bf0205\"" Apr 21 03:56:28.075448 containerd[1583]: time="2026-04-21T03:56:28.068608341Z" level=info msg="connecting to shim 33b9cd801fe05ad71acf5f63c1b374c232506327ad9da06fe43f297646bf0205" address="unix:///run/containerd/s/176dbb12faff6beed31a5b895cbed77c71968531014da723a42022d73c44fbdb" protocol=ttrpc version=3 Apr 21 03:56:28.561197 containerd[1583]: time="2026-04-21T03:56:28.560253600Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:56:28.566766 systemd[1]: Started cri-containerd-33b9cd801fe05ad71acf5f63c1b374c232506327ad9da06fe43f297646bf0205.scope - libcontainer container 33b9cd801fe05ad71acf5f63c1b374c232506327ad9da06fe43f297646bf0205. 
Apr 21 03:56:28.578871 containerd[1583]: time="2026-04-21T03:56:28.578440653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.5: active requests=0, bytes read=77" Apr 21 03:56:28.603360 containerd[1583]: time="2026-04-21T03:56:28.602603856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" with image id \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:78a11eeba8e8a02ecd6014bc8260180819ee7005f9eacb364b9595d1e4b166e1\", size \"49137337\" in 978.052462ms" Apr 21 03:56:28.603360 containerd[1583]: time="2026-04-21T03:56:28.602752620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.5\" returns image reference \"sha256:3ba7bd8ea381d6c35b8cc8b5250ae89b7e94ecac0c672dca8a449986e5205cb1\"" Apr 21 03:56:28.622493 containerd[1583]: time="2026-04-21T03:56:28.621909788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\"" Apr 21 03:56:28.677476 containerd[1583]: time="2026-04-21T03:56:28.676660151Z" level=info msg="CreateContainer within sandbox \"5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 21 03:56:28.940398 containerd[1583]: time="2026-04-21T03:56:28.935109432Z" level=info msg="Container bf420bb3c03b29aedd64e14c3c41e46d2d18d36df9bca8595f2e6aa9d1b96194: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:56:29.035073 update_engine[1556]: I20260421 03:56:29.033504 1556 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Apr 21 03:56:29.066400 update_engine[1556]: I20260421 03:56:29.039834 1556 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Apr 21 03:56:29.139380 update_engine[1556]: I20260421 03:56:29.138844 1556 prefs.cc:52] aleph-version not present in 
/var/lib/update_engine/prefs Apr 21 03:56:29.145502 update_engine[1556]: I20260421 03:56:29.144917 1556 omaha_request_params.cc:62] Current group set to stable Apr 21 03:56:29.147610 update_engine[1556]: I20260421 03:56:29.146822 1556 update_attempter.cc:499] Already updated boot flags. Skipping. Apr 21 03:56:29.147610 update_engine[1556]: I20260421 03:56:29.147476 1556 update_attempter.cc:643] Scheduling an action processor start. Apr 21 03:56:29.149058 update_engine[1556]: I20260421 03:56:29.147709 1556 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 21 03:56:29.149058 update_engine[1556]: I20260421 03:56:29.148911 1556 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Apr 21 03:56:29.149413 update_engine[1556]: I20260421 03:56:29.149366 1556 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 21 03:56:29.149413 update_engine[1556]: I20260421 03:56:29.149394 1556 omaha_request_action.cc:272] Request: Apr 21 03:56:29.149413 update_engine[1556]: Apr 21 03:56:29.149413 update_engine[1556]: Apr 21 03:56:29.149413 update_engine[1556]: Apr 21 03:56:29.149413 update_engine[1556]: Apr 21 03:56:29.149413 update_engine[1556]: Apr 21 03:56:29.149413 update_engine[1556]: Apr 21 03:56:29.149413 update_engine[1556]: Apr 21 03:56:29.149413 update_engine[1556]: Apr 21 03:56:29.149413 update_engine[1556]: I20260421 03:56:29.149406 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 21 03:56:29.173660 update_engine[1556]: I20260421 03:56:29.173265 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 21 03:56:29.202706 update_engine[1556]: I20260421 03:56:29.190686 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 21 03:56:29.208448 locksmithd[1604]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Apr 21 03:56:29.220114 update_engine[1556]: E20260421 03:56:29.210711 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 21 03:56:29.220114 update_engine[1556]: I20260421 03:56:29.219698 1556 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Apr 21 03:56:29.325467 containerd[1583]: time="2026-04-21T03:56:29.324016556Z" level=info msg="CreateContainer within sandbox \"5cb703a6ed4949c4265ce91ef59b35914ce40688aa484d68bfb30eca87eec30d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bf420bb3c03b29aedd64e14c3c41e46d2d18d36df9bca8595f2e6aa9d1b96194\"" Apr 21 03:56:29.344599 containerd[1583]: time="2026-04-21T03:56:29.340677504Z" level=info msg="StartContainer for \"bf420bb3c03b29aedd64e14c3c41e46d2d18d36df9bca8595f2e6aa9d1b96194\"" Apr 21 03:56:29.361568 containerd[1583]: time="2026-04-21T03:56:29.360851378Z" level=info msg="connecting to shim bf420bb3c03b29aedd64e14c3c41e46d2d18d36df9bca8595f2e6aa9d1b96194" address="unix:///run/containerd/s/b8d0314f914da36033622b0bb72ab0bd935818ed50797c226b367e2fc139be38" protocol=ttrpc version=3 Apr 21 03:56:29.840736 systemd[1]: Started cri-containerd-bf420bb3c03b29aedd64e14c3c41e46d2d18d36df9bca8595f2e6aa9d1b96194.scope - libcontainer container bf420bb3c03b29aedd64e14c3c41e46d2d18d36df9bca8595f2e6aa9d1b96194. 
Apr 21 03:56:30.031375 containerd[1583]: time="2026-04-21T03:56:30.030260564Z" level=info msg="StartContainer for \"33b9cd801fe05ad71acf5f63c1b374c232506327ad9da06fe43f297646bf0205\" returns successfully" Apr 21 03:56:30.570214 containerd[1583]: time="2026-04-21T03:56:30.569881816Z" level=info msg="StartContainer for \"bf420bb3c03b29aedd64e14c3c41e46d2d18d36df9bca8595f2e6aa9d1b96194\" returns successfully" Apr 21 03:56:30.641648 systemd[1]: Started sshd@17-10.0.0.123:22-10.0.0.1:37840.service - OpenSSH per-connection server daemon (10.0.0.1:37840). Apr 21 03:56:31.404907 sshd[5546]: Accepted publickey for core from 10.0.0.1 port 37840 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:56:31.464478 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:56:31.667415 systemd-logind[1551]: New session 18 of user core. Apr 21 03:56:31.720591 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 21 03:56:31.835626 kubelet[2803]: I0421 03:56:31.834945 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-6f46db48b5-v9v7j" podStartSLOduration=105.077204315 podStartE2EDuration="2m23.834624822s" podCreationTimestamp="2026-04-21 03:54:08 +0000 UTC" firstStartedPulling="2026-04-21 03:55:49.857711535 +0000 UTC m=+125.086032041" lastFinishedPulling="2026-04-21 03:56:28.615132043 +0000 UTC m=+163.843452548" observedRunningTime="2026-04-21 03:56:31.833657152 +0000 UTC m=+167.061977647" watchObservedRunningTime="2026-04-21 03:56:31.834624822 +0000 UTC m=+167.062945312" Apr 21 03:56:31.850478 kubelet[2803]: I0421 03:56:31.836707 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c599c88cb-rsbkx" podStartSLOduration=95.878204879 podStartE2EDuration="2m20.836621305s" podCreationTimestamp="2026-04-21 03:54:11 +0000 UTC" firstStartedPulling="2026-04-21 03:55:42.652634825 +0000 UTC 
m=+117.880955322" lastFinishedPulling="2026-04-21 03:56:27.611051246 +0000 UTC m=+162.839371748" observedRunningTime="2026-04-21 03:56:31.390146383 +0000 UTC m=+166.618466895" watchObservedRunningTime="2026-04-21 03:56:31.836621305 +0000 UTC m=+167.064941794" Apr 21 03:56:33.240874 kubelet[2803]: E0421 03:56:33.239705 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:56:34.050828 sshd[5555]: Connection closed by 10.0.0.1 port 37840 Apr 21 03:56:34.056602 sshd-session[5546]: pam_unix(sshd:session): session closed for user core Apr 21 03:56:34.116826 systemd[1]: sshd@17-10.0.0.123:22-10.0.0.1:37840.service: Deactivated successfully. Apr 21 03:56:34.196270 systemd[1]: session-18.scope: Deactivated successfully. Apr 21 03:56:34.211894 systemd-logind[1551]: Session 18 logged out. Waiting for processes to exit. Apr 21 03:56:34.217651 systemd-logind[1551]: Removed session 18. Apr 21 03:56:39.162638 systemd[1]: Started sshd@18-10.0.0.123:22-10.0.0.1:59230.service - OpenSSH per-connection server daemon (10.0.0.1:59230). Apr 21 03:56:39.821491 update_engine[1556]: I20260421 03:56:39.781812 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 21 03:56:39.853845 update_engine[1556]: I20260421 03:56:39.826360 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 21 03:56:39.954513 update_engine[1556]: I20260421 03:56:39.949571 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 21 03:56:39.968105 update_engine[1556]: E20260421 03:56:39.961941 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 21 03:56:39.980105 update_engine[1556]: I20260421 03:56:39.967015 1556 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Apr 21 03:56:40.638425 sshd[5658]: Accepted publickey for core from 10.0.0.1 port 59230 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:56:40.652747 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:56:40.958077 systemd-logind[1551]: New session 19 of user core. Apr 21 03:56:40.977544 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 21 03:56:43.651906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551351516.mount: Deactivated successfully. Apr 21 03:56:44.157325 sshd[5661]: Connection closed by 10.0.0.1 port 59230 Apr 21 03:56:44.161701 sshd-session[5658]: pam_unix(sshd:session): session closed for user core Apr 21 03:56:44.287858 systemd[1]: Started sshd@19-10.0.0.123:22-10.0.0.1:59232.service - OpenSSH per-connection server daemon (10.0.0.1:59232). Apr 21 03:56:44.359350 systemd[1]: sshd@18-10.0.0.123:22-10.0.0.1:59230.service: Deactivated successfully. Apr 21 03:56:44.552442 systemd[1]: session-19.scope: Deactivated successfully. Apr 21 03:56:44.556927 systemd[1]: session-19.scope: Consumed 1.172s CPU time, 30M memory peak. Apr 21 03:56:44.624547 systemd-logind[1551]: Session 19 logged out. Waiting for processes to exit. Apr 21 03:56:44.657538 systemd-logind[1551]: Removed session 19. Apr 21 03:56:45.608693 sshd[5679]: Accepted publickey for core from 10.0.0.1 port 59232 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:56:45.615133 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:56:45.765492 systemd-logind[1551]: New session 20 of user core. 
Apr 21 03:56:45.887941 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 21 03:56:48.549629 sshd[5687]: Connection closed by 10.0.0.1 port 59232 Apr 21 03:56:48.574773 sshd-session[5679]: pam_unix(sshd:session): session closed for user core Apr 21 03:56:48.747722 systemd[1]: Started sshd@20-10.0.0.123:22-10.0.0.1:49006.service - OpenSSH per-connection server daemon (10.0.0.1:49006). Apr 21 03:56:49.031111 systemd[1]: sshd@19-10.0.0.123:22-10.0.0.1:59232.service: Deactivated successfully. Apr 21 03:56:49.149082 systemd[1]: session-20.scope: Deactivated successfully. Apr 21 03:56:49.155599 systemd[1]: session-20.scope: Consumed 1.058s CPU time, 23.9M memory peak. Apr 21 03:56:49.213232 systemd-logind[1551]: Session 20 logged out. Waiting for processes to exit. Apr 21 03:56:49.274660 systemd-logind[1551]: Removed session 20. Apr 21 03:56:49.777968 sshd[5701]: Accepted publickey for core from 10.0.0.1 port 49006 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:56:49.780311 update_engine[1556]: I20260421 03:56:49.763321 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 21 03:56:49.912327 update_engine[1556]: I20260421 03:56:49.814075 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 21 03:56:49.912327 update_engine[1556]: I20260421 03:56:49.817453 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 21 03:56:49.912327 update_engine[1556]: E20260421 03:56:49.825075 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 21 03:56:49.912327 update_engine[1556]: I20260421 03:56:49.825520 1556 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Apr 21 03:56:49.878891 sshd-session[5701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:56:50.108813 systemd-logind[1551]: New session 21 of user core. Apr 21 03:56:50.123218 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 21 03:56:52.272573 sshd[5708]: Connection closed by 10.0.0.1 port 49006 Apr 21 03:56:52.275111 sshd-session[5701]: pam_unix(sshd:session): session closed for user core Apr 21 03:56:52.446459 systemd[1]: sshd@20-10.0.0.123:22-10.0.0.1:49006.service: Deactivated successfully. Apr 21 03:56:52.484140 systemd[1]: session-21.scope: Deactivated successfully. Apr 21 03:56:52.579296 systemd-logind[1551]: Session 21 logged out. Waiting for processes to exit. Apr 21 03:56:52.633561 systemd-logind[1551]: Removed session 21. Apr 21 03:56:54.142776 containerd[1583]: time="2026-04-21T03:56:54.138680320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:56:54.142776 containerd[1583]: time="2026-04-21T03:56:54.141908948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.5: active requests=0, bytes read=53086083" Apr 21 03:56:54.162595 containerd[1583]: time="2026-04-21T03:56:54.161075603Z" level=info msg="ImageCreate event name:\"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:56:54.216840 containerd[1583]: time="2026-04-21T03:56:54.216357113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:56:54.236494 containerd[1583]: time="2026-04-21T03:56:54.234945142Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" with image id \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:edfd1b6c377013f23afd5e76cb975b6cb59d1bc6554f79c0719d617f8dd0468e\", size \"53085929\" in 25.60525669s" Apr 21 03:56:54.236494 containerd[1583]: 
time="2026-04-21T03:56:54.236426914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.5\" returns image reference \"sha256:c7fd07b105db0e1cb9381872c0af21769c4fad1e0a5dab3a06b15a879b74b421\"" Apr 21 03:56:54.322491 containerd[1583]: time="2026-04-21T03:56:54.320641186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\"" Apr 21 03:56:54.530843 containerd[1583]: time="2026-04-21T03:56:54.480107132Z" level=info msg="CreateContainer within sandbox \"789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 21 03:56:54.615490 containerd[1583]: time="2026-04-21T03:56:54.608918403Z" level=info msg="Container 04a2c2f2b824b4e7d71a338b084c82d2bab8ba535435214b2e7a4d1bbc3016c0: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:56:54.770646 containerd[1583]: time="2026-04-21T03:56:54.769779467Z" level=info msg="CreateContainer within sandbox \"789436304fac352671f9287142b702758ca4fdcdf82c67928b9e036049383bea\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"04a2c2f2b824b4e7d71a338b084c82d2bab8ba535435214b2e7a4d1bbc3016c0\"" Apr 21 03:56:54.826822 containerd[1583]: time="2026-04-21T03:56:54.824377004Z" level=info msg="StartContainer for \"04a2c2f2b824b4e7d71a338b084c82d2bab8ba535435214b2e7a4d1bbc3016c0\"" Apr 21 03:56:54.875422 containerd[1583]: time="2026-04-21T03:56:54.875309542Z" level=info msg="connecting to shim 04a2c2f2b824b4e7d71a338b084c82d2bab8ba535435214b2e7a4d1bbc3016c0" address="unix:///run/containerd/s/230f8182eebb41c270e9c443e48930f9ea58b09ead2568cbda66122cfff1856b" protocol=ttrpc version=3 Apr 21 03:56:55.192032 systemd[1]: Started cri-containerd-04a2c2f2b824b4e7d71a338b084c82d2bab8ba535435214b2e7a4d1bbc3016c0.scope - libcontainer container 04a2c2f2b824b4e7d71a338b084c82d2bab8ba535435214b2e7a4d1bbc3016c0. 
Apr 21 03:56:56.061020 containerd[1583]: time="2026-04-21T03:56:56.060623123Z" level=info msg="StartContainer for \"04a2c2f2b824b4e7d71a338b084c82d2bab8ba535435214b2e7a4d1bbc3016c0\" returns successfully" Apr 21 03:56:57.304730 kubelet[2803]: I0421 03:56:57.277554 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-7fb6cdc5d9-gdrxk" podStartSLOduration=104.959756997 podStartE2EDuration="2m49.277328795s" podCreationTimestamp="2026-04-21 03:54:08 +0000 UTC" firstStartedPulling="2026-04-21 03:55:49.953754687 +0000 UTC m=+125.182075182" lastFinishedPulling="2026-04-21 03:56:54.271326486 +0000 UTC m=+189.499646980" observedRunningTime="2026-04-21 03:56:57.267571296 +0000 UTC m=+192.495891795" watchObservedRunningTime="2026-04-21 03:56:57.277328795 +0000 UTC m=+192.505649296" Apr 21 03:56:57.396141 systemd[1]: Started sshd@21-10.0.0.123:22-10.0.0.1:44304.service - OpenSSH per-connection server daemon (10.0.0.1:44304). Apr 21 03:56:58.148327 sshd[5765]: Accepted publickey for core from 10.0.0.1 port 44304 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg Apr 21 03:56:58.176605 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 03:56:58.252979 systemd-logind[1551]: New session 22 of user core. Apr 21 03:56:58.277983 systemd[1]: Started session-22.scope - Session 22 of User core. 
Apr 21 03:56:59.271579 kubelet[2803]: E0421 03:56:59.263951 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 21 03:56:59.771755 update_engine[1556]: I20260421 03:56:59.770785 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 21 03:56:59.771755 update_engine[1556]: I20260421 03:56:59.771718 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 21 03:56:59.878968 update_engine[1556]: I20260421 03:56:59.778056 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Apr 21 03:56:59.878968 update_engine[1556]: E20260421 03:56:59.841529 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 21 03:56:59.878968 update_engine[1556]: I20260421 03:56:59.841917 1556 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 21 03:56:59.878968 update_engine[1556]: I20260421 03:56:59.841935 1556 omaha_request_action.cc:617] Omaha request response: Apr 21 03:56:59.878968 update_engine[1556]: E20260421 03:56:59.843459 1556 omaha_request_action.cc:636] Omaha request network transfer failed. Apr 21 03:56:59.878968 update_engine[1556]: I20260421 03:56:59.870538 1556 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Apr 21 03:56:59.878968 update_engine[1556]: I20260421 03:56:59.870738 1556 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 21 03:56:59.878968 update_engine[1556]: I20260421 03:56:59.870750 1556 update_attempter.cc:306] Processing Done. Apr 21 03:56:59.878968 update_engine[1556]: E20260421 03:56:59.872459 1556 update_attempter.cc:619] Update failed. 
Apr 21 03:56:59.878968 update_engine[1556]: I20260421 03:56:59.873064 1556 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Apr 21 03:56:59.878968 update_engine[1556]: I20260421 03:56:59.873082 1556 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Apr 21 03:56:59.878968 update_engine[1556]: I20260421 03:56:59.873111 1556 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Apr 21 03:56:59.882446 update_engine[1556]: I20260421 03:56:59.875900 1556 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Apr 21 03:56:59.882446 update_engine[1556]: I20260421 03:56:59.880353 1556 omaha_request_action.cc:271] Posting an Omaha request to disabled Apr 21 03:56:59.882446 update_engine[1556]: I20260421 03:56:59.880446 1556 omaha_request_action.cc:272] Request: Apr 21 03:56:59.882446 update_engine[1556]: Apr 21 03:56:59.882446 update_engine[1556]: Apr 21 03:56:59.882446 update_engine[1556]: Apr 21 03:56:59.882446 update_engine[1556]: Apr 21 03:56:59.882446 update_engine[1556]: Apr 21 03:56:59.882446 update_engine[1556]: Apr 21 03:56:59.882446 update_engine[1556]: I20260421 03:56:59.880455 1556 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Apr 21 03:56:59.882446 update_engine[1556]: I20260421 03:56:59.880532 1556 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Apr 21 03:56:59.883099 update_engine[1556]: I20260421 03:56:59.883020 1556 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Apr 21 03:56:59.896021 update_engine[1556]: E20260421 03:56:59.895366 1556 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Apr 21 03:56:59.896021 update_engine[1556]: I20260421 03:56:59.895676 1556 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Apr 21 03:56:59.896021 update_engine[1556]: I20260421 03:56:59.895695 1556 omaha_request_action.cc:617] Omaha request response: Apr 21 03:56:59.896021 update_engine[1556]: I20260421 03:56:59.895716 1556 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 21 03:56:59.896021 update_engine[1556]: I20260421 03:56:59.895730 1556 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Apr 21 03:56:59.896021 update_engine[1556]: I20260421 03:56:59.895736 1556 update_attempter.cc:306] Processing Done. Apr 21 03:56:59.896021 update_engine[1556]: I20260421 03:56:59.895744 1556 update_attempter.cc:310] Error event sent. Apr 21 03:56:59.896021 update_engine[1556]: I20260421 03:56:59.895768 1556 update_check_scheduler.cc:74] Next update check in 45m17s Apr 21 03:56:59.916953 locksmithd[1604]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Apr 21 03:56:59.904500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1911135522.mount: Deactivated successfully. 
Apr 21 03:56:59.945610 locksmithd[1604]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Apr 21 03:57:00.094612 containerd[1583]: time="2026-04-21T03:57:00.093919267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.5: active requests=0, bytes read=17000660" Apr 21 03:57:00.149532 containerd[1583]: time="2026-04-21T03:57:00.148892331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" with image id \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\", size \"17000490\" in 5.82744632s" Apr 21 03:57:00.149532 containerd[1583]: time="2026-04-21T03:57:00.149445240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" returns image reference \"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\"" Apr 21 03:57:00.157452 containerd[1583]: time="2026-04-21T03:57:00.157391956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:57:00.159251 containerd[1583]: time="2026-04-21T03:57:00.158837654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\"" Apr 21 03:57:00.168589 containerd[1583]: time="2026-04-21T03:57:00.164409829Z" level=info msg="ImageCreate event name:\"sha256:32cfe8e323c5b51d8f6311b045681721ff6e6745a1c5b74bf0f0a3cdc1a7b5d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 03:57:00.297641 containerd[1583]: time="2026-04-21T03:57:00.295957442Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:0bec142ebaa70bcdda5553c7316abcef9cb60a35c2e3ed16b75f26313de91eed\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 21 03:57:00.365498 containerd[1583]: time="2026-04-21T03:57:00.343031492Z" level=info msg="CreateContainer within sandbox \"2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 03:57:00.527492 sshd[5773]: Connection closed by 10.0.0.1 port 44304 Apr 21 03:57:00.525319 sshd-session[5765]: pam_unix(sshd:session): session closed for user core Apr 21 03:57:00.579944 systemd[1]: sshd@21-10.0.0.123:22-10.0.0.1:44304.service: Deactivated successfully. Apr 21 03:57:00.605762 containerd[1583]: time="2026-04-21T03:57:00.598567514Z" level=info msg="Container 10c40692f9778625716680ed387e02e16e8e5b6de5e93919c3ef5c3abfb1cc90: CDI devices from CRI Config.CDIDevices: []" Apr 21 03:57:00.678678 systemd[1]: session-22.scope: Deactivated successfully. Apr 21 03:57:00.735799 systemd-logind[1551]: Session 22 logged out. Waiting for processes to exit. Apr 21 03:57:00.783558 systemd-logind[1551]: Removed session 22. 
Apr 21 03:57:00.832101 containerd[1583]: time="2026-04-21T03:57:00.831810502Z" level=info msg="CreateContainer within sandbox \"2ead68dcb9e8f623fd264715f16e09d48212964ae446d8ad7c9168a6b7bf2a72\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"10c40692f9778625716680ed387e02e16e8e5b6de5e93919c3ef5c3abfb1cc90\"" Apr 21 03:57:00.839130 containerd[1583]: time="2026-04-21T03:57:00.838443120Z" level=info msg="StartContainer for \"10c40692f9778625716680ed387e02e16e8e5b6de5e93919c3ef5c3abfb1cc90\"" Apr 21 03:57:00.869652 containerd[1583]: time="2026-04-21T03:57:00.866112178Z" level=info msg="connecting to shim 10c40692f9778625716680ed387e02e16e8e5b6de5e93919c3ef5c3abfb1cc90" address="unix:///run/containerd/s/aaa1c6525e9544c249c2779721d8c0ca584c708454e8c8b59dcc735aa8fc1db6" protocol=ttrpc version=3 Apr 21 03:57:01.160581 systemd[1]: Started cri-containerd-10c40692f9778625716680ed387e02e16e8e5b6de5e93919c3ef5c3abfb1cc90.scope - libcontainer container 10c40692f9778625716680ed387e02e16e8e5b6de5e93919c3ef5c3abfb1cc90. 
Apr 21 03:57:01.582734 containerd[1583]: time="2026-04-21T03:57:01.581694066Z" level=info msg="StartContainer for \"10c40692f9778625716680ed387e02e16e8e5b6de5e93919c3ef5c3abfb1cc90\" returns successfully"
Apr 21 03:57:02.757776 kubelet[2803]: I0421 03:57:02.754840 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-79c9d75764-r8nzx" podStartSLOduration=23.098114273 podStartE2EDuration="1m56.754799913s" podCreationTimestamp="2026-04-21 03:55:06 +0000 UTC" firstStartedPulling="2026-04-21 03:55:26.501441055 +0000 UTC m=+101.729761552" lastFinishedPulling="2026-04-21 03:57:00.15812669 +0000 UTC m=+195.386447192" observedRunningTime="2026-04-21 03:57:02.729018739 +0000 UTC m=+197.957339243" watchObservedRunningTime="2026-04-21 03:57:02.754799913 +0000 UTC m=+197.983120417"
Apr 21 03:57:05.513458 containerd[1583]: time="2026-04-21T03:57:05.511924226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5: active requests=0, bytes read=13498053"
Apr 21 03:57:05.513458 containerd[1583]: time="2026-04-21T03:57:05.513298804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:57:05.527870 containerd[1583]: time="2026-04-21T03:57:05.524248424Z" level=info msg="ImageCreate event name:\"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:57:05.545702 containerd[1583]: time="2026-04-21T03:57:05.542557847Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 03:57:05.545702 containerd[1583]: time="2026-04-21T03:57:05.544645598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" with image id \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:26849483b0c4d797a8ff818d988924bdf696996ca559c8c56b647aaaf70a448a\", size \"16459430\" in 5.385762694s"
Apr 21 03:57:05.545702 containerd[1583]: time="2026-04-21T03:57:05.544730101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.5\" returns image reference \"sha256:c4d89610d9eecf5b8a3542441aa9a40814ec45484688b6f68d6fe8aee64beb80\""
Apr 21 03:57:05.580232 systemd[1]: Started sshd@22-10.0.0.123:22-10.0.0.1:40604.service - OpenSSH per-connection server daemon (10.0.0.1:40604).
Apr 21 03:57:05.612694 containerd[1583]: time="2026-04-21T03:57:05.611089902Z" level=info msg="CreateContainer within sandbox \"94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 21 03:57:05.740638 containerd[1583]: time="2026-04-21T03:57:05.740458733Z" level=info msg="Container b04a2c8d0aa9efbde0837950dd81cd811c96c1dd42afb90fe8e009a7a621ac5c: CDI devices from CRI Config.CDIDevices: []"
Apr 21 03:57:05.741946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4154592928.mount: Deactivated successfully.
Apr 21 03:57:05.949310 containerd[1583]: time="2026-04-21T03:57:05.945100935Z" level=info msg="CreateContainer within sandbox \"94abbad563a1b24dccdfa2617b0b269ef95981ee565c8327634b0740c99b311a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b04a2c8d0aa9efbde0837950dd81cd811c96c1dd42afb90fe8e009a7a621ac5c\""
Apr 21 03:57:05.957621 containerd[1583]: time="2026-04-21T03:57:05.957437953Z" level=info msg="StartContainer for \"b04a2c8d0aa9efbde0837950dd81cd811c96c1dd42afb90fe8e009a7a621ac5c\""
Apr 21 03:57:06.038859 containerd[1583]: time="2026-04-21T03:57:06.037118730Z" level=info msg="connecting to shim b04a2c8d0aa9efbde0837950dd81cd811c96c1dd42afb90fe8e009a7a621ac5c" address="unix:///run/containerd/s/1cf32a5fc1b354d774abecc1a07157675ae5a626f626b63261fd1cd75fca4c44" protocol=ttrpc version=3
Apr 21 03:57:06.055362 sshd[5910]: Accepted publickey for core from 10.0.0.1 port 40604 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:06.066697 sshd-session[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:06.151765 systemd-logind[1551]: New session 23 of user core.
Apr 21 03:57:06.175057 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 21 03:57:06.246773 systemd[1]: Started cri-containerd-b04a2c8d0aa9efbde0837950dd81cd811c96c1dd42afb90fe8e009a7a621ac5c.scope - libcontainer container b04a2c8d0aa9efbde0837950dd81cd811c96c1dd42afb90fe8e009a7a621ac5c.
Apr 21 03:57:07.115880 containerd[1583]: time="2026-04-21T03:57:07.115713972Z" level=info msg="StartContainer for \"b04a2c8d0aa9efbde0837950dd81cd811c96c1dd42afb90fe8e009a7a621ac5c\" returns successfully"
Apr 21 03:57:07.408568 sshd[5941]: Connection closed by 10.0.0.1 port 40604
Apr 21 03:57:07.401794 sshd-session[5910]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:07.427443 systemd[1]: sshd@22-10.0.0.123:22-10.0.0.1:40604.service: Deactivated successfully.
Apr 21 03:57:07.586663 systemd[1]: session-23.scope: Deactivated successfully.
Apr 21 03:57:07.639764 systemd-logind[1551]: Session 23 logged out. Waiting for processes to exit.
Apr 21 03:57:07.675950 kubelet[2803]: I0421 03:57:07.672294 2803 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 21 03:57:07.675950 kubelet[2803]: I0421 03:57:07.672398 2803 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 21 03:57:07.678123 systemd-logind[1551]: Removed session 23.
Apr 21 03:57:08.308120 kubelet[2803]: I0421 03:57:08.307125 2803 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-c26j7" podStartSLOduration=86.03934856 podStartE2EDuration="2m57.307096532s" podCreationTimestamp="2026-04-21 03:54:11 +0000 UTC" firstStartedPulling="2026-04-21 03:55:34.29788015 +0000 UTC m=+109.526200647" lastFinishedPulling="2026-04-21 03:57:05.565628113 +0000 UTC m=+200.793948619" observedRunningTime="2026-04-21 03:57:08.305859264 +0000 UTC m=+203.534179778" watchObservedRunningTime="2026-04-21 03:57:08.307096532 +0000 UTC m=+203.535417032"
Apr 21 03:57:09.264986 kubelet[2803]: E0421 03:57:09.260938 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:57:12.483297 systemd[1]: Started sshd@23-10.0.0.123:22-10.0.0.1:40612.service - OpenSSH per-connection server daemon (10.0.0.1:40612).
Apr 21 03:57:12.761404 sshd[6000]: Accepted publickey for core from 10.0.0.1 port 40612 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:12.761749 sshd-session[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:12.800515 systemd-logind[1551]: New session 24 of user core.
Apr 21 03:57:12.830120 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 21 03:57:13.327366 sshd[6003]: Connection closed by 10.0.0.1 port 40612
Apr 21 03:57:13.328609 sshd-session[6000]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:13.368283 systemd[1]: sshd@23-10.0.0.123:22-10.0.0.1:40612.service: Deactivated successfully.
Apr 21 03:57:13.412714 systemd[1]: session-24.scope: Deactivated successfully.
Apr 21 03:57:13.430649 systemd-logind[1551]: Session 24 logged out. Waiting for processes to exit.
Apr 21 03:57:13.511403 systemd-logind[1551]: Removed session 24.
Apr 21 03:57:18.217842 kubelet[2803]: E0421 03:57:18.217394 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:57:18.429910 systemd[1]: Started sshd@24-10.0.0.123:22-10.0.0.1:41708.service - OpenSSH per-connection server daemon (10.0.0.1:41708).
Apr 21 03:57:18.812762 sshd[6019]: Accepted publickey for core from 10.0.0.1 port 41708 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:18.836074 sshd-session[6019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:18.857252 systemd-logind[1551]: New session 25 of user core.
Apr 21 03:57:18.866943 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 21 03:57:19.177065 sshd[6024]: Connection closed by 10.0.0.1 port 41708
Apr 21 03:57:19.179935 sshd-session[6019]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:19.196830 systemd[1]: sshd@24-10.0.0.123:22-10.0.0.1:41708.service: Deactivated successfully.
Apr 21 03:57:19.210657 systemd[1]: session-25.scope: Deactivated successfully.
Apr 21 03:57:19.214800 systemd-logind[1551]: Session 25 logged out. Waiting for processes to exit.
Apr 21 03:57:19.224004 systemd[1]: Started sshd@25-10.0.0.123:22-10.0.0.1:41720.service - OpenSSH per-connection server daemon (10.0.0.1:41720).
Apr 21 03:57:19.276568 systemd-logind[1551]: Removed session 25.
Apr 21 03:57:19.438902 sshd[6037]: Accepted publickey for core from 10.0.0.1 port 41720 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:19.446881 sshd-session[6037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:19.472475 systemd-logind[1551]: New session 26 of user core.
Apr 21 03:57:19.523076 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 21 03:57:20.657579 sshd[6040]: Connection closed by 10.0.0.1 port 41720
Apr 21 03:57:20.659125 sshd-session[6037]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:20.677971 systemd[1]: sshd@25-10.0.0.123:22-10.0.0.1:41720.service: Deactivated successfully.
Apr 21 03:57:20.693853 systemd[1]: session-26.scope: Deactivated successfully.
Apr 21 03:57:20.699776 systemd-logind[1551]: Session 26 logged out. Waiting for processes to exit.
Apr 21 03:57:20.706274 systemd[1]: Started sshd@26-10.0.0.123:22-10.0.0.1:41728.service - OpenSSH per-connection server daemon (10.0.0.1:41728).
Apr 21 03:57:20.708911 systemd-logind[1551]: Removed session 26.
Apr 21 03:57:20.817814 sshd[6055]: Accepted publickey for core from 10.0.0.1 port 41728 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:20.822026 sshd-session[6055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:20.835843 systemd-logind[1551]: New session 27 of user core.
Apr 21 03:57:20.856867 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 21 03:57:21.356411 kubelet[2803]: E0421 03:57:21.350438 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:57:23.559475 sshd[6058]: Connection closed by 10.0.0.1 port 41728
Apr 21 03:57:23.556360 sshd-session[6055]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:23.655001 systemd[1]: sshd@26-10.0.0.123:22-10.0.0.1:41728.service: Deactivated successfully.
Apr 21 03:57:23.700680 systemd[1]: session-27.scope: Deactivated successfully.
Apr 21 03:57:23.707629 systemd[1]: session-27.scope: Consumed 1.478s CPU time, 38.4M memory peak.
Apr 21 03:57:23.710655 systemd-logind[1551]: Session 27 logged out. Waiting for processes to exit.
Apr 21 03:57:23.754084 systemd[1]: Started sshd@27-10.0.0.123:22-10.0.0.1:41742.service - OpenSSH per-connection server daemon (10.0.0.1:41742).
Apr 21 03:57:23.801355 systemd-logind[1551]: Removed session 27.
Apr 21 03:57:24.294953 sshd[6079]: Accepted publickey for core from 10.0.0.1 port 41742 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:24.300684 sshd-session[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:24.360644 systemd-logind[1551]: New session 28 of user core.
Apr 21 03:57:24.374445 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 21 03:57:26.544416 sshd[6082]: Connection closed by 10.0.0.1 port 41742
Apr 21 03:57:26.556371 sshd-session[6079]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:26.615253 systemd[1]: Started sshd@28-10.0.0.123:22-10.0.0.1:53910.service - OpenSSH per-connection server daemon (10.0.0.1:53910).
Apr 21 03:57:26.657389 systemd[1]: sshd@27-10.0.0.123:22-10.0.0.1:41742.service: Deactivated successfully.
Apr 21 03:57:26.749261 systemd[1]: session-28.scope: Deactivated successfully.
Apr 21 03:57:26.793640 systemd-logind[1551]: Session 28 logged out. Waiting for processes to exit.
Apr 21 03:57:26.917536 systemd-logind[1551]: Removed session 28.
Apr 21 03:57:27.068498 sshd[6093]: Accepted publickey for core from 10.0.0.1 port 53910 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:27.071835 sshd-session[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:27.270800 systemd-logind[1551]: New session 29 of user core.
Apr 21 03:57:27.275142 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 21 03:57:28.107732 sshd[6103]: Connection closed by 10.0.0.1 port 53910
Apr 21 03:57:28.108281 sshd-session[6093]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:28.154332 systemd[1]: sshd@28-10.0.0.123:22-10.0.0.1:53910.service: Deactivated successfully.
Apr 21 03:57:28.206866 systemd[1]: session-29.scope: Deactivated successfully.
Apr 21 03:57:28.222816 systemd-logind[1551]: Session 29 logged out. Waiting for processes to exit.
Apr 21 03:57:28.274091 systemd-logind[1551]: Removed session 29.
Apr 21 03:57:32.218205 kubelet[2803]: E0421 03:57:32.218093 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:57:33.197231 systemd[1]: Started sshd@29-10.0.0.123:22-10.0.0.1:53916.service - OpenSSH per-connection server daemon (10.0.0.1:53916).
Apr 21 03:57:33.584057 sshd[6165]: Accepted publickey for core from 10.0.0.1 port 53916 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:33.621835 sshd-session[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:33.722630 systemd-logind[1551]: New session 30 of user core.
Apr 21 03:57:33.792532 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 21 03:57:34.227108 kubelet[2803]: E0421 03:57:34.225811 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:57:34.483453 sshd[6168]: Connection closed by 10.0.0.1 port 53916
Apr 21 03:57:34.484808 sshd-session[6165]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:34.520064 systemd[1]: sshd@29-10.0.0.123:22-10.0.0.1:53916.service: Deactivated successfully.
Apr 21 03:57:34.570683 systemd[1]: session-30.scope: Deactivated successfully.
Apr 21 03:57:34.645702 systemd-logind[1551]: Session 30 logged out. Waiting for processes to exit.
Apr 21 03:57:34.679285 systemd-logind[1551]: Removed session 30.
Apr 21 03:57:39.527989 systemd[1]: Started sshd@30-10.0.0.123:22-10.0.0.1:34944.service - OpenSSH per-connection server daemon (10.0.0.1:34944).
Apr 21 03:57:40.128065 sshd[6232]: Accepted publickey for core from 10.0.0.1 port 34944 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:40.132649 sshd-session[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:40.211406 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 21 03:57:40.214678 systemd-logind[1551]: New session 31 of user core.
Apr 21 03:57:40.650268 sshd[6237]: Connection closed by 10.0.0.1 port 34944
Apr 21 03:57:40.650892 sshd-session[6232]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:40.664791 systemd[1]: sshd@30-10.0.0.123:22-10.0.0.1:34944.service: Deactivated successfully.
Apr 21 03:57:40.671358 systemd[1]: session-31.scope: Deactivated successfully.
Apr 21 03:57:40.673092 systemd-logind[1551]: Session 31 logged out. Waiting for processes to exit.
Apr 21 03:57:40.675808 systemd-logind[1551]: Removed session 31.
Apr 21 03:57:45.752818 systemd[1]: Started sshd@31-10.0.0.123:22-10.0.0.1:33152.service - OpenSSH per-connection server daemon (10.0.0.1:33152).
Apr 21 03:57:45.857215 sshd[6254]: Accepted publickey for core from 10.0.0.1 port 33152 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:45.862541 sshd-session[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:45.884477 systemd-logind[1551]: New session 32 of user core.
Apr 21 03:57:45.894087 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 21 03:57:46.119632 sshd[6257]: Connection closed by 10.0.0.1 port 33152
Apr 21 03:57:46.121007 sshd-session[6254]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:46.127732 systemd[1]: sshd@31-10.0.0.123:22-10.0.0.1:33152.service: Deactivated successfully.
Apr 21 03:57:46.135130 systemd[1]: session-32.scope: Deactivated successfully.
Apr 21 03:57:46.137748 systemd-logind[1551]: Session 32 logged out. Waiting for processes to exit.
Apr 21 03:57:46.150750 systemd-logind[1551]: Removed session 32.
Apr 21 03:57:51.244452 systemd[1]: Started sshd@32-10.0.0.123:22-10.0.0.1:33162.service - OpenSSH per-connection server daemon (10.0.0.1:33162).
Apr 21 03:57:51.271383 kubelet[2803]: E0421 03:57:51.267995 2803 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 21 03:57:51.621850 sshd[6272]: Accepted publickey for core from 10.0.0.1 port 33162 ssh2: RSA SHA256:pu164vRLVIGM+iiPNPxqhji89LThTcISCPvpkIb2Elg
Apr 21 03:57:51.658752 sshd-session[6272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 03:57:51.771897 systemd-logind[1551]: New session 33 of user core.
Apr 21 03:57:51.788905 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 21 03:57:53.187992 sshd[6275]: Connection closed by 10.0.0.1 port 33162
Apr 21 03:57:53.198389 sshd-session[6272]: pam_unix(sshd:session): session closed for user core
Apr 21 03:57:53.262294 systemd[1]: sshd@32-10.0.0.123:22-10.0.0.1:33162.service: Deactivated successfully.
Apr 21 03:57:53.342679 systemd[1]: session-33.scope: Deactivated successfully.
Apr 21 03:57:53.370850 systemd-logind[1551]: Session 33 logged out. Waiting for processes to exit.
Apr 21 03:57:53.418593 systemd-logind[1551]: Removed session 33.