Apr 17 23:30:00.246136 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:30:00.246154 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:30:00.246163 kernel: BIOS-provided physical RAM map:
Apr 17 23:30:00.246167 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 17 23:30:00.246172 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 17 23:30:00.246176 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 17 23:30:00.246181 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 17 23:30:00.246186 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 17 23:30:00.246190 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 23:30:00.246196 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 17 23:30:00.246200 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 23:30:00.246204 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 17 23:30:00.246209 kernel: NX (Execute Disable) protection: active
Apr 17 23:30:00.246213 kernel: APIC: Static calls initialized
Apr 17 23:30:00.246219 kernel: SMBIOS 2.8 present.
Apr 17 23:30:00.246225 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 17 23:30:00.246230 kernel: Hypervisor detected: KVM
Apr 17 23:30:00.246235 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:30:00.246239 kernel: kvm-clock: using sched offset of 4897714862 cycles
Apr 17 23:30:00.246244 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:30:00.246249 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 23:30:00.246254 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:30:00.246259 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:30:00.246264 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 17 23:30:00.246270 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 17 23:30:00.246275 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:30:00.246280 kernel: Using GB pages for direct mapping
Apr 17 23:30:00.246285 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:30:00.246355 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 17 23:30:00.246360 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:30:00.246365 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:30:00.246370 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:30:00.246375 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 17 23:30:00.246381 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:30:00.246388 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:30:00.246396 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:30:00.246404 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:30:00.246412 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 17 23:30:00.246419 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 17 23:30:00.246427 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 17 23:30:00.246440 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 17 23:30:00.246451 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 17 23:30:00.246461 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 17 23:30:00.246469 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 17 23:30:00.246482 kernel: No NUMA configuration found
Apr 17 23:30:00.246487 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 17 23:30:00.246492 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 17 23:30:00.246498 kernel: Zone ranges:
Apr 17 23:30:00.246503 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:30:00.246509 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 17 23:30:00.246513 kernel: Normal empty
Apr 17 23:30:00.246518 kernel: Movable zone start for each node
Apr 17 23:30:00.246523 kernel: Early memory node ranges
Apr 17 23:30:00.246528 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 17 23:30:00.246534 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 17 23:30:00.246538 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 17 23:30:00.246543 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:30:00.246550 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 17 23:30:00.246555 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 17 23:30:00.246560 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:30:00.246565 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:30:00.246570 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:30:00.246575 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:30:00.246580 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:30:00.246585 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:30:00.246590 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:30:00.246596 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:30:00.246601 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:30:00.246606 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:30:00.246611 kernel: TSC deadline timer available
Apr 17 23:30:00.246645 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 17 23:30:00.246651 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:30:00.246656 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 23:30:00.246661 kernel: kvm-guest: setup PV sched yield
Apr 17 23:30:00.246666 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 17 23:30:00.246673 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:30:00.246678 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:30:00.246683 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 23:30:00.246688 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 17 23:30:00.246693 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 17 23:30:00.246698 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 23:30:00.246703 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:30:00.246708 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:30:00.246714 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:30:00.246721 kernel: random: crng init done
Apr 17 23:30:00.246726 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:30:00.246731 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:30:00.246736 kernel: Fallback order for Node 0: 0
Apr 17 23:30:00.246741 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 17 23:30:00.246746 kernel: Policy zone: DMA32
Apr 17 23:30:00.246751 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:30:00.246756 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 137896K reserved, 0K cma-reserved)
Apr 17 23:30:00.246763 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 23:30:00.246768 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:30:00.246773 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:30:00.246778 kernel: Dynamic Preempt: voluntary
Apr 17 23:30:00.246783 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:30:00.246789 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:30:00.246794 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 23:30:00.246799 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:30:00.246804 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:30:00.246811 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:30:00.246816 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:30:00.246852 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 23:30:00.246858 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 23:30:00.246863 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:30:00.246868 kernel: Console: colour VGA+ 80x25
Apr 17 23:30:00.246872 kernel: printk: console [ttyS0] enabled
Apr 17 23:30:00.246877 kernel: ACPI: Core revision 20230628
Apr 17 23:30:00.246883 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 23:30:00.246889 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:30:00.246894 kernel: x2apic enabled
Apr 17 23:30:00.246899 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:30:00.246904 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 23:30:00.246909 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 23:30:00.246914 kernel: kvm-guest: setup PV IPIs
Apr 17 23:30:00.246919 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 23:30:00.246925 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:30:00.246936 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 23:30:00.246941 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:30:00.246947 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 23:30:00.246952 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 23:30:00.246959 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:30:00.246965 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:30:00.246970 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:30:00.246976 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:30:00.246983 kernel: RETBleed: Vulnerable
Apr 17 23:30:00.246988 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:30:00.246994 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:30:00.246999 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:30:00.247005 kernel: active return thunk: its_return_thunk
Apr 17 23:30:00.247010 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:30:00.247016 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:30:00.247021 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:30:00.247027 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:30:00.247032 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:30:00.247039 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:30:00.247044 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:30:00.247050 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:30:00.247055 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:30:00.247060 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:30:00.247066 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:30:00.247071 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:30:00.247077 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:30:00.247084 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:30:00.247089 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:30:00.247095 kernel: landlock: Up and running.
Apr 17 23:30:00.247100 kernel: SELinux: Initializing.
Apr 17 23:30:00.247106 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:30:00.247112 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:30:00.247117 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 23:30:00.247123 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:30:00.247128 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:30:00.247135 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:30:00.247141 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 23:30:00.247146 kernel: signal: max sigframe size: 3632
Apr 17 23:30:00.247152 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:30:00.247157 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:30:00.247163 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:30:00.247168 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:30:00.247174 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:30:00.247179 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 23:30:00.247186 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 23:30:00.247192 kernel: smpboot: Max logical packages: 1
Apr 17 23:30:00.247197 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 23:30:00.247203 kernel: devtmpfs: initialized
Apr 17 23:30:00.247208 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:30:00.247214 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:30:00.247219 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 23:30:00.247225 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:30:00.247231 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:30:00.247237 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:30:00.247243 kernel: audit: type=2000 audit(1776468597.339:1): state=initialized audit_enabled=0 res=1
Apr 17 23:30:00.247248 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:30:00.247253 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:30:00.247259 kernel: cpuidle: using governor menu
Apr 17 23:30:00.247264 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:30:00.247270 kernel: dca service started, version 1.12.1
Apr 17 23:30:00.247276 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 17 23:30:00.247281 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 23:30:00.247355 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:30:00.247361 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:30:00.247367 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:30:00.247372 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:30:00.247378 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:30:00.247385 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:30:00.247394 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:30:00.247403 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:30:00.247411 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:30:00.247423 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:30:00.247433 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:30:00.247442 kernel: ACPI: Interpreter enabled
Apr 17 23:30:00.247452 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:30:00.247462 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:30:00.247481 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:30:00.247489 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:30:00.247495 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 23:30:00.247500 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:30:00.247613 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:30:00.247677 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 23:30:00.247732 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 23:30:00.247739 kernel: PCI host bridge to bus 0000:00
Apr 17 23:30:00.247797 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:30:00.247892 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:30:00.247945 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:30:00.247995 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 17 23:30:00.248068 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 23:30:00.248144 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 17 23:30:00.248220 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:30:00.248423 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 17 23:30:00.248530 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 17 23:30:00.248627 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 17 23:30:00.248722 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 17 23:30:00.248781 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 17 23:30:00.248885 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:30:00.248949 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:30:00.249005 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 17 23:30:00.249061 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 17 23:30:00.249120 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 17 23:30:00.249180 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 17 23:30:00.249236 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 17 23:30:00.249353 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 17 23:30:00.249429 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 17 23:30:00.249505 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:30:00.249564 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 17 23:30:00.249621 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 17 23:30:00.249676 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 17 23:30:00.249731 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 17 23:30:00.249794 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:30:00.249890 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:30:00.249984 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:30:00.250044 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 17 23:30:00.250099 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 17 23:30:00.250159 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:30:00.250213 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 17 23:30:00.250220 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:30:00.250226 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:30:00.250232 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:30:00.250237 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:30:00.250244 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:30:00.250250 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:30:00.250255 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:30:00.250261 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:30:00.250266 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:30:00.250272 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:30:00.250278 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:30:00.250283 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:30:00.250349 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:30:00.250356 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:30:00.250362 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:30:00.250368 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:30:00.250373 kernel: iommu: Default domain type: Translated
Apr 17 23:30:00.250379 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:30:00.250386 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:30:00.250395 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:30:00.250405 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 17 23:30:00.250413 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 17 23:30:00.250498 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:30:00.250554 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:30:00.250609 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:30:00.250616 kernel: vgaarb: loaded
Apr 17 23:30:00.250622 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 23:30:00.250628 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 23:30:00.250633 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:30:00.250638 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:30:00.250646 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:30:00.250652 kernel: pnp: PnP ACPI init
Apr 17 23:30:00.250731 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:30:00.250744 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 23:30:00.250754 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:30:00.250764 kernel: NET: Registered PF_INET protocol family
Apr 17 23:30:00.250773 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:30:00.250783 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:30:00.250792 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:30:00.250805 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:30:00.250815 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:30:00.250874 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:30:00.250883 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:30:00.250893 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:30:00.250903 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:30:00.250912 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:30:00.251007 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:30:00.251095 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:30:00.251174 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:30:00.251239 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 23:30:00.251371 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:30:00.251443 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 17 23:30:00.251456 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:30:00.251467 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:30:00.251477 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:30:00.251487 kernel: Initialise system trusted keyrings
Apr 17 23:30:00.251502 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:30:00.251514 kernel: Key type asymmetric registered
Apr 17 23:30:00.251521 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:30:00.251527 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:30:00.251532 kernel: io scheduler mq-deadline registered
Apr 17 23:30:00.251538 kernel: io scheduler kyber registered
Apr 17 23:30:00.251544 kernel: io scheduler bfq registered
Apr 17 23:30:00.251549 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:30:00.251555 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:30:00.251562 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 23:30:00.251568 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 23:30:00.251574 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:30:00.251579 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:30:00.251585 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:30:00.251590 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:30:00.251596 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:30:00.251662 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 23:30:00.251671 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:30:00.251723 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 23:30:00.251774 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T23:29:59 UTC (1776468599)
Apr 17 23:30:00.251868 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 17 23:30:00.251876 kernel: intel_pstate: CPU model not supported
Apr 17 23:30:00.251881 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:30:00.251887 kernel: Segment Routing with IPv6
Apr 17 23:30:00.251892 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:30:00.251898 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:30:00.251905 kernel: Key type dns_resolver registered
Apr 17 23:30:00.251911 kernel: IPI shorthand broadcast: enabled
Apr 17 23:30:00.251917 kernel: sched_clock: Marking stable (1839031696, 397872586)->(2409012167, -172107885)
Apr 17 23:30:00.251923 kernel: registered taskstats version 1
Apr 17 23:30:00.251928 kernel: Loading compiled-in X.509 certificates
Apr 17 23:30:00.251934 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:30:00.251939 kernel: Key type .fscrypt registered
Apr 17 23:30:00.251945 kernel: Key type fscrypt-provisioning registered
Apr 17 23:30:00.251950 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:30:00.251957 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:30:00.251962 kernel: ima: No architecture policies found
Apr 17 23:30:00.251968 kernel: clk: Disabling unused clocks
Apr 17 23:30:00.251973 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:30:00.251979 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:30:00.251984 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:30:00.251990 kernel: Run /init as init process
Apr 17 23:30:00.251996 kernel: with arguments:
Apr 17 23:30:00.252001 kernel: /init
Apr 17 23:30:00.252008 kernel: with environment:
Apr 17 23:30:00.252013 kernel: HOME=/
Apr 17 23:30:00.252019 kernel: TERM=linux
Apr 17 23:30:00.252026 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:30:00.252034 systemd[1]: Detected virtualization kvm.
Apr 17 23:30:00.252040 systemd[1]: Detected architecture x86-64.
Apr 17 23:30:00.252046 systemd[1]: Running in initrd.
Apr 17 23:30:00.252052 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:30:00.252060 systemd[1]: Hostname set to .
Apr 17 23:30:00.252066 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:30:00.252072 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:30:00.252078 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:30:00.252084 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:30:00.252090 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:30:00.252096 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:30:00.252103 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:30:00.252111 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:30:00.252127 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:30:00.252134 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:30:00.252140 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:30:00.252147 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:30:00.252154 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:30:00.252160 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:30:00.252166 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:30:00.252172 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:30:00.252178 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:30:00.252184 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:30:00.252190 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:30:00.252196 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:30:00.252204 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:30:00.252210 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:30:00.252217 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:30:00.252226 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:30:00.252236 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:30:00.252245 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:30:00.252255 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:30:00.252266 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:30:00.252279 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:30:00.252370 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:30:00.252383 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:30:00.252413 systemd-journald[194]: Collecting audit messages is disabled.
Apr 17 23:30:00.252442 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:30:00.252455 systemd-journald[194]: Journal started
Apr 17 23:30:00.252482 systemd-journald[194]: Runtime Journal (/run/log/journal/70d65b24c204425a8e7c59e76a38841c) is 6.0M, max 48.4M, 42.3M free.
Apr 17 23:30:00.254491 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:30:00.259476 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:30:00.264114 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:30:00.278354 systemd-modules-load[195]: Inserted module 'overlay'
Apr 17 23:30:00.561496 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:30:00.561529 kernel: Bridge firewalling registered
Apr 17 23:30:00.283627 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:30:00.308275 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 17 23:30:00.568613 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:30:00.575119 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:30:00.580880 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:30:00.596485 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:30:00.606456 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:30:00.613238 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:30:00.620478 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:30:00.629464 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:30:00.639191 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:30:00.642016 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:30:00.662527 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:30:00.669463 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:30:00.678527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:30:00.688703 dracut-cmdline[223]: dracut-dracut-053
Apr 17 23:30:00.688703 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:30:00.732712 systemd-resolved[234]: Positive Trust Anchors:
Apr 17 23:30:00.732756 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:30:00.732780 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:30:00.734891 systemd-resolved[234]: Defaulting to hostname 'linux'.
Apr 17 23:30:00.735657 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:30:00.739772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:30:00.825495 kernel: SCSI subsystem initialized
Apr 17 23:30:00.839767 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:30:00.857568 kernel: iscsi: registered transport (tcp)
Apr 17 23:30:00.880890 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:30:00.880964 kernel: QLogic iSCSI HBA Driver
Apr 17 23:30:00.923149 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:30:00.944551 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:30:00.976578 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:30:00.976669 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:30:00.980361 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:30:01.028430 kernel: raid6: avx512x4 gen() 40817 MB/s
Apr 17 23:30:01.047619 kernel: raid6: avx512x2 gen() 41857 MB/s
Apr 17 23:30:01.066487 kernel: raid6: avx512x1 gen() 29074 MB/s
Apr 17 23:30:01.085461 kernel: raid6: avx2x4 gen() 20960 MB/s
Apr 17 23:30:01.104407 kernel: raid6: avx2x2 gen() 20474 MB/s
Apr 17 23:30:01.125051 kernel: raid6: avx2x1 gen() 20430 MB/s
Apr 17 23:30:01.125182 kernel: raid6: using algorithm avx512x2 gen() 41857 MB/s
Apr 17 23:30:01.145805 kernel: raid6: .... xor() 28945 MB/s, rmw enabled
Apr 17 23:30:01.145944 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 23:30:01.167552 kernel: xor: automatically using best checksumming function avx
Apr 17 23:30:01.337528 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:30:01.348598 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:30:01.361613 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:30:01.372488 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Apr 17 23:30:01.375240 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:30:01.396683 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:30:01.416553 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Apr 17 23:30:01.451460 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:30:01.465960 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:30:01.496952 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:30:01.508793 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:30:01.521466 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:30:01.534777 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:30:01.547975 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 17 23:30:01.548472 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:30:01.569587 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 17 23:30:01.562198 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:30:01.588444 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:30:01.588465 kernel: GPT:9289727 != 19775487
Apr 17 23:30:01.588472 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:30:01.588480 kernel: GPT:9289727 != 19775487
Apr 17 23:30:01.588486 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:30:01.588493 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:30:01.606180 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:30:01.610859 kernel: libata version 3.00 loaded.
Apr 17 23:30:01.621424 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 23:30:01.625601 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:30:01.645088 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Apr 17 23:30:01.657525 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 17 23:30:01.662112 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/vda3 scanned by (udev-worker) (462)
Apr 17 23:30:01.662131 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 17 23:30:01.674367 kernel: AES CTR mode by8 optimization enabled
Apr 17 23:30:01.674411 kernel: ahci 0000:00:1f.2: version 3.0
Apr 17 23:30:01.674544 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 17 23:30:01.677684 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 17 23:30:01.689130 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 17 23:30:01.689377 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 17 23:30:01.694424 kernel: scsi host0: ahci
Apr 17 23:30:01.698369 kernel: scsi host1: ahci
Apr 17 23:30:01.699698 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 23:30:01.707410 kernel: scsi host2: ahci
Apr 17 23:30:01.707630 kernel: scsi host3: ahci
Apr 17 23:30:01.714512 kernel: scsi host4: ahci
Apr 17 23:30:01.715027 kernel: scsi host5: ahci
Apr 17 23:30:01.715137 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 17 23:30:01.723531 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 17 23:30:01.723584 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 17 23:30:01.729954 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 17 23:30:01.730049 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 17 23:30:01.733762 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 17 23:30:01.742603 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 17 23:30:01.737633 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 17 23:30:01.769985 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:30:01.774780 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:30:01.774934 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:30:01.791411 disk-uuid[552]: Primary Header is updated.
Apr 17 23:30:01.791411 disk-uuid[552]: Secondary Entries is updated.
Apr 17 23:30:01.791411 disk-uuid[552]: Secondary Header is updated.
Apr 17 23:30:01.784969 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:30:01.800501 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:30:01.800637 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:30:01.806738 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:30:01.815095 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:30:01.847281 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:30:02.050539 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 17 23:30:02.050724 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 17 23:30:02.052421 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 17 23:30:02.052467 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 17 23:30:02.054562 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 17 23:30:02.054666 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 17 23:30:02.054678 kernel: ata3.00: applying bridge limits
Apr 17 23:30:02.054688 kernel: ata3.00: configured for UDMA/100
Apr 17 23:30:02.056615 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 17 23:30:02.059534 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 17 23:30:02.106588 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 17 23:30:02.106995 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 17 23:30:02.122469 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 17 23:30:02.156425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:30:02.173740 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:30:02.190035 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:30:02.816468 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:30:02.817714 disk-uuid[553]: The operation has completed successfully.
Apr 17 23:30:02.853747 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:30:02.853933 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:30:02.885829 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:30:02.894735 sh[593]: Success
Apr 17 23:30:02.912361 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 17 23:30:02.945638 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:30:02.964817 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:30:02.974112 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:30:03.017074 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0
Apr 17 23:30:03.017181 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:30:03.017198 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:30:03.022275 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:30:03.022399 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:30:03.030441 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:30:03.031583 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:30:03.053664 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:30:03.056021 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:30:03.070743 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:30:03.070776 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:30:03.070785 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:30:03.077934 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:30:03.085444 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:30:03.090475 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:30:03.097271 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:30:03.106541 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:30:03.158230 ignition[689]: Ignition 2.19.0
Apr 17 23:30:03.158262 ignition[689]: Stage: fetch-offline
Apr 17 23:30:03.158286 ignition[689]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:30:03.158344 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:30:03.158414 ignition[689]: parsed url from cmdline: ""
Apr 17 23:30:03.158416 ignition[689]: no config URL provided
Apr 17 23:30:03.158420 ignition[689]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:30:03.158425 ignition[689]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:30:03.158444 ignition[689]: op(1): [started] loading QEMU firmware config module
Apr 17 23:30:03.158447 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 17 23:30:03.172560 ignition[689]: op(1): [finished] loading QEMU firmware config module
Apr 17 23:30:03.193390 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:30:03.202455 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:30:03.221126 systemd-networkd[783]: lo: Link UP
Apr 17 23:30:03.221159 systemd-networkd[783]: lo: Gained carrier
Apr 17 23:30:03.222182 systemd-networkd[783]: Enumeration completed
Apr 17 23:30:03.222379 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:30:03.224756 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:30:03.224758 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:30:03.225613 systemd-networkd[783]: eth0: Link UP
Apr 17 23:30:03.225616 systemd-networkd[783]: eth0: Gained carrier
Apr 17 23:30:03.225621 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:30:03.226893 systemd[1]: Reached target network.target - Network.
Apr 17 23:30:03.258386 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 23:30:03.418444 ignition[689]: parsing config with SHA512: 5316293e155e75e49953d23923091eb2edd85ba15fd150d12557c67991fd53140d57a430a9af16e878c5c028a3986889a2762780bc1ababb750460465f94f3da
Apr 17 23:30:03.426707 unknown[689]: fetched base config from "system"
Apr 17 23:30:03.426730 unknown[689]: fetched user config from "qemu"
Apr 17 23:30:03.427106 ignition[689]: fetch-offline: fetch-offline passed
Apr 17 23:30:03.427153 ignition[689]: Ignition finished successfully
Apr 17 23:30:03.435425 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:30:03.437014 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 17 23:30:03.454527 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:30:03.472947 ignition[787]: Ignition 2.19.0
Apr 17 23:30:03.472975 ignition[787]: Stage: kargs
Apr 17 23:30:03.473111 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:30:03.473118 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:30:03.473811 ignition[787]: kargs: kargs passed
Apr 17 23:30:03.473879 ignition[787]: Ignition finished successfully
Apr 17 23:30:03.482216 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:30:03.498588 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:30:03.514769 ignition[795]: Ignition 2.19.0
Apr 17 23:30:03.514793 ignition[795]: Stage: disks
Apr 17 23:30:03.515075 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:30:03.515083 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:30:03.515770 ignition[795]: disks: disks passed
Apr 17 23:30:03.515802 ignition[795]: Ignition finished successfully
Apr 17 23:30:03.525513 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:30:03.526092 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:30:03.529785 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:30:03.542370 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:30:03.543279 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:30:03.548027 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:30:03.563727 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:30:03.582233 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 17 23:30:03.587089 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:30:03.599534 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:30:03.715549 kernel: EXT4-fs (vda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:30:03.715956 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:30:03.719150 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:30:03.744496 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:30:03.748074 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:30:03.764688 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (813)
Apr 17 23:30:03.764712 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:30:03.764721 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:30:03.764729 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:30:03.756600 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:30:03.756632 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:30:03.756650 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:30:03.776683 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:30:03.796359 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:30:03.796534 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:30:03.804444 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:30:03.833006 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:30:03.838179 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:30:03.844021 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:30:03.849725 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:30:03.937885 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:30:03.955506 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:30:03.963510 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:30:03.977463 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:30:03.994235 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:30:04.006692 ignition[927]: INFO : Ignition 2.19.0
Apr 17 23:30:04.006692 ignition[927]: INFO : Stage: mount
Apr 17 23:30:04.010558 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:30:04.010558 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:30:04.010558 ignition[927]: INFO : mount: mount passed
Apr 17 23:30:04.010558 ignition[927]: INFO : Ignition finished successfully
Apr 17 23:30:04.009228 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:30:04.012566 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:30:04.032510 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:30:04.040558 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:30:04.057648 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939)
Apr 17 23:30:04.057697 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:30:04.060219 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:30:04.062016 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:30:04.067366 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:30:04.068999 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:30:04.100893 ignition[957]: INFO : Ignition 2.19.0
Apr 17 23:30:04.100893 ignition[957]: INFO : Stage: files
Apr 17 23:30:04.105350 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:30:04.105350 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:30:04.105350 ignition[957]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:30:04.105350 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:30:04.105350 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:30:04.122888 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:30:04.122888 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:30:04.122888 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:30:04.122888 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:30:04.122888 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:30:04.109655 unknown[957]: wrote ssh authorized keys file for user: core
Apr 17 23:30:04.180233 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:30:04.264362 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:30:04.264362 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:30:04.274046 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 17 23:30:04.579197 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:30:04.942062 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 17 23:30:04.942062 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:30:04.950699 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:30:04.955731 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:30:04.955731 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:30:04.955731 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 17 23:30:04.955731 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:30:04.972065 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:30:04.972065 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 17 23:30:04.972065 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:30:05.008035 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:30:05.012504 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:30:05.016420 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:30:05.016420 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:30:05.016420 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:30:05.016420 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:30:05.016420 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:30:05.016420 ignition[957]: INFO : files: files passed
Apr 17 23:30:05.016420 ignition[957]: INFO : Ignition finished successfully
Apr 17 23:30:05.040584 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:30:05.040618 systemd-networkd[783]: eth0: Gained IPv6LL
Apr 17 23:30:05.050588 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:30:05.055785 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:30:05.065202 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:30:05.065345 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:30:05.074111 initrd-setup-root-after-ignition[983]: grep: /sysroot/oem/oem-release: No such file or directory Apr 17 23:30:05.080371 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:30:05.080371 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:30:05.091925 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 17 23:30:05.082743 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:30:05.085475 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 17 23:30:05.116711 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:30:05.145426 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:30:05.145546 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:30:05.147517 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 17 23:30:05.155062 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 17 23:30:05.159071 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 17 23:30:05.176484 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 17 23:30:05.193643 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:30:05.212812 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 17 23:30:05.224175 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:30:05.225243 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:30:05.231253 systemd[1]: Stopped target timers.target - Timer Units. 
Apr 17 23:30:05.236955 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:30:05.237158 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:30:05.245007 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:30:05.250188 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:30:05.251988 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:30:05.257222 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:30:05.262926 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 17 23:30:05.268132 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 17 23:30:05.275267 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:30:05.277123 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:30:05.288519 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:30:05.289964 systemd[1]: Stopped target swap.target - Swaps. Apr 17 23:30:05.294199 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 17 23:30:05.294382 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:30:05.301972 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:30:05.303335 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:30:05.310252 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:30:05.310464 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:30:05.315828 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:30:05.315968 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 17 23:30:05.325972 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Apr 17 23:30:05.326096 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:30:05.327431 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:30:05.334058 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:30:05.335015 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:30:05.339078 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:30:05.344953 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:30:05.349072 systemd[1]: iscsid.socket: Deactivated successfully. Apr 17 23:30:05.349144 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:30:05.354191 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:30:05.354286 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:30:05.360131 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:30:05.360230 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:30:05.364075 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:30:05.364177 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:30:05.393799 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:30:05.394920 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:30:05.395023 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:30:05.400960 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Apr 17 23:30:05.412454 ignition[1011]: INFO : Ignition 2.19.0 Apr 17 23:30:05.412454 ignition[1011]: INFO : Stage: umount Apr 17 23:30:05.412454 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:30:05.412454 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:30:05.412454 ignition[1011]: INFO : umount: umount passed Apr 17 23:30:05.412454 ignition[1011]: INFO : Ignition finished successfully Apr 17 23:30:05.406638 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:30:05.411828 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:30:05.426732 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:30:05.426835 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:30:05.437461 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 17 23:30:05.438211 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:30:05.438373 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 17 23:30:05.441224 systemd[1]: Stopped target network.target - Network. Apr 17 23:30:05.444061 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:30:05.444143 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:30:05.450545 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:30:05.450615 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:30:05.454977 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 17 23:30:05.455037 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:30:05.456345 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:30:05.456404 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:30:05.464394 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Apr 17 23:30:05.469260 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:30:05.474823 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:30:05.474953 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:30:05.482933 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:30:05.483000 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:30:05.487802 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 17 23:30:05.487929 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:30:05.491442 systemd-networkd[783]: eth0: DHCPv6 lease lost Apr 17 23:30:05.494475 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:30:05.494513 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:30:05.495930 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:30:05.495962 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:30:05.502621 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:30:05.502718 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:30:05.507822 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:30:05.507894 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:30:05.534547 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 17 23:30:05.534897 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 17 23:30:05.534936 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:30:05.544118 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:30:05.544200 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Apr 17 23:30:05.545786 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:30:05.545815 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 17 23:30:05.552476 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:30:05.580565 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:30:05.580632 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:30:05.592919 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 17 23:30:05.593060 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:30:05.594519 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:30:05.594549 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 17 23:30:05.602270 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:30:05.602390 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:30:05.607335 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:30:05.607372 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:30:05.623583 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:30:05.623651 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 17 23:30:05.630587 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:30:05.630640 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:30:05.648485 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 17 23:30:05.649112 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:30:05.649153 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 17 23:30:05.659825 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:30:05.659950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:30:05.676270 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:30:05.676450 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:30:05.678574 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:30:05.690247 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 17 23:30:05.706669 systemd[1]: Switching root. Apr 17 23:30:05.735894 systemd-journald[194]: Journal stopped Apr 17 23:30:06.654461 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Apr 17 23:30:06.654513 kernel: SELinux: policy capability network_peer_controls=1 Apr 17 23:30:06.654527 kernel: SELinux: policy capability open_perms=1 Apr 17 23:30:06.654535 kernel: SELinux: policy capability extended_socket_class=1 Apr 17 23:30:06.654543 kernel: SELinux: policy capability always_check_network=0 Apr 17 23:30:06.654551 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 17 23:30:06.654558 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 17 23:30:06.654568 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 17 23:30:06.654576 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 17 23:30:06.654584 kernel: audit: type=1403 audit(1776468605.849:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 17 23:30:06.654592 systemd[1]: Successfully loaded SELinux policy in 38.284ms. Apr 17 23:30:06.654612 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.956ms. 
Apr 17 23:30:06.654622 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:30:06.654632 systemd[1]: Detected virtualization kvm. Apr 17 23:30:06.654640 systemd[1]: Detected architecture x86-64. Apr 17 23:30:06.654652 systemd[1]: Detected first boot. Apr 17 23:30:06.654660 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:30:06.654668 zram_generator::config[1055]: No configuration found. Apr 17 23:30:06.654676 systemd[1]: Populated /etc with preset unit settings. Apr 17 23:30:06.654684 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 17 23:30:06.654692 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 17 23:30:06.654700 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 17 23:30:06.654711 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 17 23:30:06.654721 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 17 23:30:06.654728 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 17 23:30:06.654736 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 17 23:30:06.654745 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 17 23:30:06.654753 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 17 23:30:06.654760 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 17 23:30:06.654768 systemd[1]: Created slice user.slice - User and Session Slice. 
Apr 17 23:30:06.654776 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:30:06.654785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:30:06.654793 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 17 23:30:06.654800 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 17 23:30:06.654809 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 17 23:30:06.654817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:30:06.654825 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 17 23:30:06.654832 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:30:06.654840 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 17 23:30:06.654848 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 17 23:30:06.654886 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 17 23:30:06.654895 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 17 23:30:06.654903 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:30:06.654914 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:30:06.654922 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:30:06.654931 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:30:06.654938 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 17 23:30:06.654946 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 17 23:30:06.654955 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Apr 17 23:30:06.654963 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:30:06.654971 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:30:06.654979 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 17 23:30:06.654987 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 17 23:30:06.654994 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 17 23:30:06.655002 systemd[1]: Mounting media.mount - External Media Directory... Apr 17 23:30:06.655010 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:30:06.655018 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 17 23:30:06.655027 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 17 23:30:06.655035 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 17 23:30:06.655043 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 17 23:30:06.655050 systemd[1]: Reached target machines.target - Containers. Apr 17 23:30:06.655058 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 17 23:30:06.655066 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:30:06.655073 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:30:06.655082 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 17 23:30:06.655092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:30:06.655099 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Apr 17 23:30:06.655107 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:30:06.655115 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 17 23:30:06.655122 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:30:06.655131 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 17 23:30:06.655138 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 17 23:30:06.655146 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 17 23:30:06.655153 kernel: loop: module loaded Apr 17 23:30:06.655162 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 17 23:30:06.655170 systemd[1]: Stopped systemd-fsck-usr.service. Apr 17 23:30:06.655177 kernel: fuse: init (API version 7.39) Apr 17 23:30:06.655185 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:30:06.655193 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:30:06.655201 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 17 23:30:06.655208 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 17 23:30:06.655227 systemd-journald[1125]: Collecting audit messages is disabled. Apr 17 23:30:06.655245 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:30:06.655253 kernel: ACPI: bus type drm_connector registered Apr 17 23:30:06.655262 systemd-journald[1125]: Journal started Apr 17 23:30:06.655279 systemd-journald[1125]: Runtime Journal (/run/log/journal/70d65b24c204425a8e7c59e76a38841c) is 6.0M, max 48.4M, 42.3M free. Apr 17 23:30:06.222891 systemd[1]: Queued start job for default target multi-user.target. 
Apr 17 23:30:06.240827 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 17 23:30:06.241215 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 17 23:30:06.664370 systemd[1]: verity-setup.service: Deactivated successfully. Apr 17 23:30:06.664468 systemd[1]: Stopped verity-setup.service. Apr 17 23:30:06.671388 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:30:06.675618 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:30:06.677830 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 17 23:30:06.680755 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 17 23:30:06.684032 systemd[1]: Mounted media.mount - External Media Directory. Apr 17 23:30:06.686730 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 17 23:30:06.689681 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 17 23:30:06.692604 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 17 23:30:06.695334 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 17 23:30:06.699061 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:30:06.702751 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 17 23:30:06.702955 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 17 23:30:06.706172 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:30:06.706425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:30:06.709749 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:30:06.709928 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:30:06.713196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 17 23:30:06.713403 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:30:06.716993 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 17 23:30:06.717133 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 17 23:30:06.720068 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:30:06.720203 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:30:06.723245 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:30:06.726591 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 17 23:30:06.730954 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 17 23:30:06.734651 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:30:06.746216 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 17 23:30:06.762661 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 17 23:30:06.767520 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 17 23:30:06.771213 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 17 23:30:06.771263 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:30:06.775133 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 17 23:30:06.782732 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 17 23:30:06.786998 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 17 23:30:06.790643 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 17 23:30:06.792834 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 17 23:30:06.797502 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 17 23:30:06.801170 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:30:06.802204 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 17 23:30:06.805671 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:30:06.811280 systemd-journald[1125]: Time spent on flushing to /var/log/journal/70d65b24c204425a8e7c59e76a38841c is 31.759ms for 947 entries. Apr 17 23:30:06.811280 systemd-journald[1125]: System Journal (/var/log/journal/70d65b24c204425a8e7c59e76a38841c) is 8.0M, max 195.6M, 187.6M free. Apr 17 23:30:06.854030 systemd-journald[1125]: Received client request to flush runtime journal. Apr 17 23:30:06.854056 kernel: loop0: detected capacity change from 0 to 142488 Apr 17 23:30:06.807523 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:30:06.814485 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 17 23:30:06.823512 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 17 23:30:06.829448 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 17 23:30:06.833965 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 17 23:30:06.839404 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 17 23:30:06.847477 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 23:30:06.852376 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Apr 17 23:30:06.857960 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 17 23:30:06.862237 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:30:06.869366 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 17 23:30:06.871227 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 17 23:30:06.876472 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 17 23:30:06.892626 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 17 23:30:06.898043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:30:06.903538 udevadm[1173]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 17 23:30:06.911396 kernel: loop1: detected capacity change from 0 to 217752 Apr 17 23:30:06.914629 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 17 23:30:06.915457 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 17 23:30:06.920743 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Apr 17 23:30:06.920756 systemd-tmpfiles[1188]: ACLs are not supported, ignoring. Apr 17 23:30:06.928041 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:30:06.960444 kernel: loop2: detected capacity change from 0 to 140768 Apr 17 23:30:06.997343 kernel: loop3: detected capacity change from 0 to 142488 Apr 17 23:30:07.014774 kernel: loop4: detected capacity change from 0 to 217752 Apr 17 23:30:07.027363 kernel: loop5: detected capacity change from 0 to 140768 Apr 17 23:30:07.040271 (sd-merge)[1194]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 17 23:30:07.040641 (sd-merge)[1194]: Merged extensions into '/usr'. 
Apr 17 23:30:07.043726 systemd[1]: Reloading requested from client PID 1170 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 23:30:07.043755 systemd[1]: Reloading... Apr 17 23:30:07.090431 zram_generator::config[1217]: No configuration found. Apr 17 23:30:07.118441 ldconfig[1165]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:30:07.188214 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:30:07.220766 systemd[1]: Reloading finished in 176 ms. Apr 17 23:30:07.247203 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:30:07.250789 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 23:30:07.254561 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:30:07.272626 systemd[1]: Starting ensure-sysext.service... Apr 17 23:30:07.276016 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:30:07.280083 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:30:07.286187 systemd[1]: Reloading requested from client PID 1258 ('systemctl') (unit ensure-sysext.service)... Apr 17 23:30:07.286217 systemd[1]: Reloading... Apr 17 23:30:07.292283 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:30:07.292840 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 23:30:07.293445 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:30:07.293662 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. 
Apr 17 23:30:07.293723 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Apr 17 23:30:07.295753 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:30:07.295818 systemd-tmpfiles[1261]: Skipping /boot Apr 17 23:30:07.301174 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:30:07.301238 systemd-tmpfiles[1261]: Skipping /boot Apr 17 23:30:07.302706 systemd-udevd[1262]: Using default interface naming scheme 'v255'. Apr 17 23:30:07.318917 zram_generator::config[1288]: No configuration found. Apr 17 23:30:07.363448 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1301) Apr 17 23:30:07.396564 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 17 23:30:07.410428 kernel: ACPI: button: Power Button [PWRF] Apr 17 23:30:07.417897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:30:07.438678 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 17 23:30:07.441429 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 17 23:30:07.446814 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 17 23:30:07.446960 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 17 23:30:07.459373 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:30:07.479283 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 23:30:07.479464 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 23:30:07.482836 systemd[1]: Reloading finished in 196 ms. Apr 17 23:30:07.512651 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Apr 17 23:30:07.530421 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:30:07.646559 systemd[1]: Finished ensure-sysext.service.
Apr 17 23:30:07.670855 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:30:07.682549 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:30:07.687075 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:30:07.690407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:30:07.693219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:30:07.698496 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:30:07.702131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:30:07.707920 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:30:07.711283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:30:07.713142 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:30:07.719029 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:30:07.726614 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:30:07.732789 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:30:07.739110 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 23:30:07.744438 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:30:07.746614 augenrules[1383]: No rules
Apr 17 23:30:07.748825 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:30:07.751602 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:30:07.752668 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 17 23:30:07.756232 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:30:07.759500 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:30:07.759630 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:30:07.762944 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:30:07.766423 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:30:07.766562 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:30:07.769478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:30:07.769613 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:30:07.773002 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:30:07.773180 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:30:07.776381 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:30:07.779976 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:30:07.795570 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:30:07.797203 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:30:07.797278 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:30:07.798704 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:30:07.803705 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:30:07.805141 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:30:07.805818 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:30:07.812624 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:30:07.815505 lvm[1402]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:30:07.836754 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:30:07.882405 systemd-networkd[1380]: lo: Link UP
Apr 17 23:30:07.882439 systemd-networkd[1380]: lo: Gained carrier
Apr 17 23:30:07.883955 systemd-networkd[1380]: Enumeration completed
Apr 17 23:30:07.884894 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:30:07.884922 systemd-networkd[1380]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:30:07.885822 systemd-networkd[1380]: eth0: Link UP
Apr 17 23:30:07.885827 systemd-networkd[1380]: eth0: Gained carrier
Apr 17 23:30:07.885836 systemd-networkd[1380]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:30:07.886186 systemd-resolved[1382]: Positive Trust Anchors:
Apr 17 23:30:07.886193 systemd-resolved[1382]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:30:07.886218 systemd-resolved[1382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:30:07.889580 systemd-resolved[1382]: Defaulting to hostname 'linux'.
Apr 17 23:30:07.901791 systemd-networkd[1380]: eth0: DHCPv4 address 10.0.0.28/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 23:30:07.902680 systemd-timesyncd[1386]: Network configuration changed, trying to establish connection.
Apr 17 23:30:07.903268 systemd-timesyncd[1386]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 17 23:30:07.903383 systemd-timesyncd[1386]: Initial clock synchronization to Fri 2026-04-17 23:30:07.931833 UTC.
Apr 17 23:30:08.017774 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 23:30:08.021139 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:30:08.024140 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:30:08.027516 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:30:08.030902 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:30:08.036358 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:30:08.039670 systemd[1]: Reached target network.target - Network.
Apr 17 23:30:08.042128 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:30:08.045353 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:30:08.048266 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:30:08.051593 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:30:08.054836 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:30:08.058146 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:30:08.058189 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:30:08.060516 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:30:08.063266 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:30:08.066161 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:30:08.069438 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:30:08.072775 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:30:08.076929 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:30:08.099937 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:30:08.104516 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:30:08.110047 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:30:08.113883 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:30:08.113966 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:30:08.117117 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:30:08.120004 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:30:08.122894 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:30:08.122944 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:30:08.124505 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:30:08.128534 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:30:08.132422 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:30:08.136356 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:30:08.139084 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:30:08.140424 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:30:08.144646 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:30:08.147607 jq[1426]: false
Apr 17 23:30:08.148472 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:30:08.155244 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:30:08.161677 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:30:08.165750 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:30:08.166121 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:30:08.167406 extend-filesystems[1427]: Found loop3
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found loop4
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found loop5
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found sr0
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found vda
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found vda1
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found vda2
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found vda3
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found usr
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found vda4
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found vda6
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found vda7
Apr 17 23:30:08.172486 extend-filesystems[1427]: Found vda9
Apr 17 23:30:08.172486 extend-filesystems[1427]: Checking size of /dev/vda9
Apr 17 23:30:08.254172 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 17 23:30:08.254194 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1312)
Apr 17 23:30:08.254204 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 17 23:30:08.190770 dbus-daemon[1425]: [system] SELinux support is enabled
Apr 17 23:30:08.254506 extend-filesystems[1427]: Resized partition /dev/vda9
Apr 17 23:30:08.172551 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:30:08.260808 extend-filesystems[1446]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:30:08.260808 extend-filesystems[1446]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 17 23:30:08.260808 extend-filesystems[1446]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 17 23:30:08.260808 extend-filesystems[1446]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 17 23:30:08.194523 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:30:08.278713 jq[1448]: true
Apr 17 23:30:08.278841 extend-filesystems[1427]: Resized filesystem in /dev/vda9
Apr 17 23:30:08.200109 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:30:08.285122 update_engine[1439]: I20260417 23:30:08.227926 1439 main.cc:92] Flatcar Update Engine starting
Apr 17 23:30:08.285122 update_engine[1439]: I20260417 23:30:08.230720 1439 update_check_scheduler.cc:74] Next update check in 7m18s
Apr 17 23:30:08.205640 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:30:08.213080 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:30:08.285637 tar[1451]: linux-amd64/LICENSE
Apr 17 23:30:08.285637 tar[1451]: linux-amd64/helm
Apr 17 23:30:08.213446 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:30:08.285824 jq[1453]: true
Apr 17 23:30:08.213652 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:30:08.213923 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:30:08.219713 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:30:08.219966 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:30:08.246024 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:30:08.246523 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:30:08.256227 systemd-logind[1438]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 17 23:30:08.256239 systemd-logind[1438]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 23:30:08.258698 systemd-logind[1438]: New seat seat0.
Apr 17 23:30:08.261276 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:30:08.269138 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:30:08.280981 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:30:08.296944 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:30:08.297095 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:30:08.300581 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:30:08.300734 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:30:08.315233 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:30:08.344141 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:30:08.351915 bash[1481]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:30:08.356778 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:30:08.361150 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 17 23:30:08.399147 sshd_keygen[1447]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:30:08.423873 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:30:08.434577 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:30:08.443852 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 23:30:08.444026 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 23:30:08.452949 containerd[1455]: time="2026-04-17T23:30:08.452872140Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:30:08.454613 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 23:30:08.464937 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 23:30:08.473056 containerd[1455]: time="2026-04-17T23:30:08.472988672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:30:08.473796 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:30:08.475895 containerd[1455]: time="2026-04-17T23:30:08.475839849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:30:08.475895 containerd[1455]: time="2026-04-17T23:30:08.475889155Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:30:08.475965 containerd[1455]: time="2026-04-17T23:30:08.475901403Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:30:08.476040 containerd[1455]: time="2026-04-17T23:30:08.476011833Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:30:08.476056 containerd[1455]: time="2026-04-17T23:30:08.476044583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:30:08.476110 containerd[1455]: time="2026-04-17T23:30:08.476083482Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:30:08.476127 containerd[1455]: time="2026-04-17T23:30:08.476110788Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:30:08.476250 containerd[1455]: time="2026-04-17T23:30:08.476220960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:30:08.476268 containerd[1455]: time="2026-04-17T23:30:08.476250870Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:30:08.476268 containerd[1455]: time="2026-04-17T23:30:08.476260733Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:30:08.476338 containerd[1455]: time="2026-04-17T23:30:08.476267529Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:30:08.476948 containerd[1455]: time="2026-04-17T23:30:08.476365059Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:30:08.476948 containerd[1455]: time="2026-04-17T23:30:08.476498207Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:30:08.476948 containerd[1455]: time="2026-04-17T23:30:08.476569994Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:30:08.476948 containerd[1455]: time="2026-04-17T23:30:08.476578478Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:30:08.476948 containerd[1455]: time="2026-04-17T23:30:08.476622266Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:30:08.476948 containerd[1455]: time="2026-04-17T23:30:08.476649073Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:30:08.478125 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 17 23:30:08.481660 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:30:08.488134 containerd[1455]: time="2026-04-17T23:30:08.488083982Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:30:08.488175 containerd[1455]: time="2026-04-17T23:30:08.488143578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:30:08.488175 containerd[1455]: time="2026-04-17T23:30:08.488161754Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:30:08.488206 containerd[1455]: time="2026-04-17T23:30:08.488175205Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:30:08.488206 containerd[1455]: time="2026-04-17T23:30:08.488187176Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:30:08.488369 containerd[1455]: time="2026-04-17T23:30:08.488287306Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:30:08.488625 containerd[1455]: time="2026-04-17T23:30:08.488597673Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:30:08.488734 containerd[1455]: time="2026-04-17T23:30:08.488693627Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:30:08.488734 containerd[1455]: time="2026-04-17T23:30:08.488728810Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:30:08.488786 containerd[1455]: time="2026-04-17T23:30:08.488770547Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:30:08.488800 containerd[1455]: time="2026-04-17T23:30:08.488790710Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:30:08.488813 containerd[1455]: time="2026-04-17T23:30:08.488801505Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:30:08.488813 containerd[1455]: time="2026-04-17T23:30:08.488810283Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:30:08.488846 containerd[1455]: time="2026-04-17T23:30:08.488820102Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:30:08.488846 containerd[1455]: time="2026-04-17T23:30:08.488832601Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:30:08.488846 containerd[1455]: time="2026-04-17T23:30:08.488842365Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:30:08.488882 containerd[1455]: time="2026-04-17T23:30:08.488851681Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:30:08.488882 containerd[1455]: time="2026-04-17T23:30:08.488860551Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:30:08.488882 containerd[1455]: time="2026-04-17T23:30:08.488876108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.488920 containerd[1455]: time="2026-04-17T23:30:08.488886407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.488920 containerd[1455]: time="2026-04-17T23:30:08.488895402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.488948 containerd[1455]: time="2026-04-17T23:30:08.488923635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.488948 containerd[1455]: time="2026-04-17T23:30:08.488933400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.488948 containerd[1455]: time="2026-04-17T23:30:08.488943362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.488987 containerd[1455]: time="2026-04-17T23:30:08.488951826Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.488987 containerd[1455]: time="2026-04-17T23:30:08.488961030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.488987 containerd[1455]: time="2026-04-17T23:30:08.488969820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.488987 containerd[1455]: time="2026-04-17T23:30:08.488979450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.489035 containerd[1455]: time="2026-04-17T23:30:08.488987490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.489035 containerd[1455]: time="2026-04-17T23:30:08.488995664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.489035 containerd[1455]: time="2026-04-17T23:30:08.489004187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.489035 containerd[1455]: time="2026-04-17T23:30:08.489014292Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:30:08.489035 containerd[1455]: time="2026-04-17T23:30:08.489031271Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.489095 containerd[1455]: time="2026-04-17T23:30:08.489040110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.489095 containerd[1455]: time="2026-04-17T23:30:08.489047923Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:30:08.489095 containerd[1455]: time="2026-04-17T23:30:08.489081273Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:30:08.489133 containerd[1455]: time="2026-04-17T23:30:08.489093984Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:30:08.489133 containerd[1455]: time="2026-04-17T23:30:08.489102767Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:30:08.489133 containerd[1455]: time="2026-04-17T23:30:08.489111211Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:30:08.489133 containerd[1455]: time="2026-04-17T23:30:08.489117876Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.489133 containerd[1455]: time="2026-04-17T23:30:08.489126705Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:30:08.489195 containerd[1455]: time="2026-04-17T23:30:08.489137324Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:30:08.489195 containerd[1455]: time="2026-04-17T23:30:08.489144292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:30:08.489437 containerd[1455]: time="2026-04-17T23:30:08.489386656Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:30:08.489569 containerd[1455]: time="2026-04-17T23:30:08.489445060Z" level=info msg="Connect containerd service"
Apr 17 23:30:08.489569 containerd[1455]: time="2026-04-17T23:30:08.489475130Z" level=info msg="using legacy CRI server"
Apr 17 23:30:08.489569 containerd[1455]: time="2026-04-17T23:30:08.489480539Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:30:08.489614 containerd[1455]: time="2026-04-17T23:30:08.489571097Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:30:08.490153 containerd[1455]: time="2026-04-17T23:30:08.490109589Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:30:08.490390 containerd[1455]: time="2026-04-17T23:30:08.490287799Z" level=info msg="Start subscribing containerd event"
Apr 17 23:30:08.490390 containerd[1455]: time="2026-04-17T23:30:08.490381306Z" level=info msg="Start recovering state"
Apr 17 23:30:08.490448 containerd[1455]: time="2026-04-17T23:30:08.490423859Z" level=info msg="Start event monitor"
Apr 17 23:30:08.490464 containerd[1455]: time="2026-04-17T23:30:08.490453531Z" level=info msg="Start snapshots syncer"
Apr 17 23:30:08.490464 containerd[1455]: time="2026-04-17T23:30:08.490461068Z" level=info msg="Start cni network conf syncer for default"
Apr 17 23:30:08.490495 containerd[1455]: time="2026-04-17T23:30:08.490466215Z" level=info msg="Start streaming server"
Apr 17 23:30:08.490852 containerd[1455]: time="2026-04-17T23:30:08.490822817Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:30:08.490903 containerd[1455]: time="2026-04-17T23:30:08.490878147Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 23:30:08.492830 containerd[1455]: time="2026-04-17T23:30:08.491942862Z" level=info msg="containerd successfully booted in 0.041122s"
Apr 17 23:30:08.492517 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:30:08.698263 tar[1451]: linux-amd64/README.md
Apr 17 23:30:08.716209 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:30:09.328696 systemd-networkd[1380]: eth0: Gained IPv6LL
Apr 17 23:30:09.331943 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:30:09.336088 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:30:09.350575 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 17 23:30:09.355455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:30:09.359778 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:30:09.377498 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 17 23:30:09.377675 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 17 23:30:09.381874 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:30:09.385703 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 17 23:30:10.104892 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:30:10.108753 (kubelet)[1536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:30:10.108776 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 17 23:30:10.112033 systemd[1]: Startup finished in 2.027s (kernel) + 6.038s (initrd) + 4.300s (userspace) = 12.365s.
Apr 17 23:30:10.550901 kubelet[1536]: E0417 23:30:10.550741 1536 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:30:10.554796 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:30:10.555007 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:30:13.573770 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:30:13.574969 systemd[1]: Started sshd@0-10.0.0.28:22-10.0.0.1:51926.service - OpenSSH per-connection server daemon (10.0.0.1:51926).
Apr 17 23:30:13.638070 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 51926 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:13.640184 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:13.651447 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 17 23:30:13.663687 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 17 23:30:13.666847 systemd-logind[1438]: New session 1 of user core.
Apr 17 23:30:13.675467 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 17 23:30:13.677869 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 17 23:30:13.684737 (systemd)[1554]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 17 23:30:13.763897 systemd[1554]: Queued start job for default target default.target.
Apr 17 23:30:13.772098 systemd[1554]: Created slice app.slice - User Application Slice.
Apr 17 23:30:13.772154 systemd[1554]: Reached target paths.target - Paths.
Apr 17 23:30:13.772165 systemd[1554]: Reached target timers.target - Timers.
Apr 17 23:30:13.773552 systemd[1554]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 17 23:30:13.783836 systemd[1554]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 17 23:30:13.783907 systemd[1554]: Reached target sockets.target - Sockets.
Apr 17 23:30:13.783917 systemd[1554]: Reached target basic.target - Basic System.
Apr 17 23:30:13.783945 systemd[1554]: Reached target default.target - Main User Target.
Apr 17 23:30:13.783967 systemd[1554]: Startup finished in 92ms.
Apr 17 23:30:13.784393 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 17 23:30:13.785727 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 17 23:30:13.852532 systemd[1]: Started sshd@1-10.0.0.28:22-10.0.0.1:51942.service - OpenSSH per-connection server daemon (10.0.0.1:51942).
Apr 17 23:30:13.907363 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 51942 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:13.908671 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:13.915895 systemd-logind[1438]: New session 2 of user core.
Apr 17 23:30:13.925593 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 23:30:13.985802 sshd[1565]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:13.996985 systemd[1]: sshd@1-10.0.0.28:22-10.0.0.1:51942.service: Deactivated successfully.
Apr 17 23:30:13.999246 systemd[1]: session-2.scope: Deactivated successfully.
Apr 17 23:30:14.000830 systemd-logind[1438]: Session 2 logged out. Waiting for processes to exit.
Apr 17 23:30:14.002361 systemd[1]: Started sshd@2-10.0.0.28:22-10.0.0.1:51950.service - OpenSSH per-connection server daemon (10.0.0.1:51950).
Apr 17 23:30:14.003497 systemd-logind[1438]: Removed session 2.
Apr 17 23:30:14.041849 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 51950 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:14.043114 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:14.048190 systemd-logind[1438]: New session 3 of user core.
Apr 17 23:30:14.064141 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 23:30:14.122920 sshd[1572]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:14.136023 systemd[1]: sshd@2-10.0.0.28:22-10.0.0.1:51950.service: Deactivated successfully.
Apr 17 23:30:14.137625 systemd[1]: session-3.scope: Deactivated successfully.
Apr 17 23:30:14.138832 systemd-logind[1438]: Session 3 logged out. Waiting for processes to exit.
Apr 17 23:30:14.146101 systemd[1]: Started sshd@3-10.0.0.28:22-10.0.0.1:51958.service - OpenSSH per-connection server daemon (10.0.0.1:51958).
Apr 17 23:30:14.147179 systemd-logind[1438]: Removed session 3.
Apr 17 23:30:14.178509 sshd[1579]: Accepted publickey for core from 10.0.0.1 port 51958 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:14.181201 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:14.186434 systemd-logind[1438]: New session 4 of user core.
Apr 17 23:30:14.197095 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 23:30:14.257870 sshd[1579]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:14.271212 systemd[1]: sshd@3-10.0.0.28:22-10.0.0.1:51958.service: Deactivated successfully.
Apr 17 23:30:14.273023 systemd[1]: session-4.scope: Deactivated successfully.
Apr 17 23:30:14.274646 systemd-logind[1438]: Session 4 logged out. Waiting for processes to exit.
Apr 17 23:30:14.284678 systemd[1]: Started sshd@4-10.0.0.28:22-10.0.0.1:51964.service - OpenSSH per-connection server daemon (10.0.0.1:51964).
Apr 17 23:30:14.285777 systemd-logind[1438]: Removed session 4.
Apr 17 23:30:14.320887 sshd[1586]: Accepted publickey for core from 10.0.0.1 port 51964 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:14.322424 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:14.327344 systemd-logind[1438]: New session 5 of user core.
Apr 17 23:30:14.339919 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 17 23:30:14.403622 sudo[1589]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:30:14.403846 sudo[1589]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:30:14.425761 sudo[1589]: pam_unix(sudo:session): session closed for user root
Apr 17 23:30:14.427723 sshd[1586]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:14.441780 systemd[1]: sshd@4-10.0.0.28:22-10.0.0.1:51964.service: Deactivated successfully.
Apr 17 23:30:14.443968 systemd[1]: session-5.scope: Deactivated successfully.
Apr 17 23:30:14.445451 systemd-logind[1438]: Session 5 logged out. Waiting for processes to exit.
Apr 17 23:30:14.458064 systemd[1]: Started sshd@5-10.0.0.28:22-10.0.0.1:51968.service - OpenSSH per-connection server daemon (10.0.0.1:51968).
Apr 17 23:30:14.459520 systemd-logind[1438]: Removed session 5.
Apr 17 23:30:14.492005 sshd[1594]: Accepted publickey for core from 10.0.0.1 port 51968 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:14.494806 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:14.499572 systemd-logind[1438]: New session 6 of user core.
Apr 17 23:30:14.515182 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 17 23:30:14.573978 sudo[1598]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:30:14.574204 sudo[1598]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:30:14.578494 sudo[1598]: pam_unix(sudo:session): session closed for user root
Apr 17 23:30:14.584688 sudo[1597]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:30:14.585014 sudo[1597]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:30:14.609708 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:30:14.611985 auditctl[1601]: No rules
Apr 17 23:30:14.612233 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:30:14.612471 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:30:14.614512 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:30:14.660164 augenrules[1619]: No rules
Apr 17 23:30:14.662975 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:30:14.664360 sudo[1597]: pam_unix(sudo:session): session closed for user root
Apr 17 23:30:14.665856 sshd[1594]: pam_unix(sshd:session): session closed for user core
Apr 17 23:30:14.676477 systemd[1]: sshd@5-10.0.0.28:22-10.0.0.1:51968.service: Deactivated successfully.
Apr 17 23:30:14.677733 systemd[1]: session-6.scope: Deactivated successfully.
Apr 17 23:30:14.679003 systemd-logind[1438]: Session 6 logged out. Waiting for processes to exit.
Apr 17 23:30:14.680028 systemd[1]: Started sshd@6-10.0.0.28:22-10.0.0.1:51976.service - OpenSSH per-connection server daemon (10.0.0.1:51976).
Apr 17 23:30:14.680757 systemd-logind[1438]: Removed session 6.
Apr 17 23:30:14.712872 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 51976 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:30:14.713820 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:30:14.719769 systemd-logind[1438]: New session 7 of user core.
Apr 17 23:30:14.733605 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:30:14.786457 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:30:14.786673 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:30:15.058886 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:30:15.059030 (dockerd)[1648]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:30:15.343059 dockerd[1648]: time="2026-04-17T23:30:15.342829924Z" level=info msg="Starting up"
Apr 17 23:30:15.516734 dockerd[1648]: time="2026-04-17T23:30:15.516661006Z" level=info msg="Loading containers: start."
Apr 17 23:30:15.661380 kernel: Initializing XFRM netlink socket
Apr 17 23:30:15.762115 systemd-networkd[1380]: docker0: Link UP
Apr 17 23:30:15.785180 dockerd[1648]: time="2026-04-17T23:30:15.785110692Z" level=info msg="Loading containers: done."
Apr 17 23:30:15.800013 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2509740552-merged.mount: Deactivated successfully.
Apr 17 23:30:15.801208 dockerd[1648]: time="2026-04-17T23:30:15.801118058Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:30:15.801376 dockerd[1648]: time="2026-04-17T23:30:15.801356660Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:30:15.801568 dockerd[1648]: time="2026-04-17T23:30:15.801504222Z" level=info msg="Daemon has completed initialization"
Apr 17 23:30:15.841026 dockerd[1648]: time="2026-04-17T23:30:15.840948650Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:30:15.841194 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:30:16.297422 containerd[1455]: time="2026-04-17T23:30:16.297377721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\""
Apr 17 23:30:17.103811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864738133.mount: Deactivated successfully.
Apr 17 23:30:17.883160 containerd[1455]: time="2026-04-17T23:30:17.883053984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:17.886016 containerd[1455]: time="2026-04-17T23:30:17.884113524Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.4: active requests=0, bytes read=27578861"
Apr 17 23:30:17.886016 containerd[1455]: time="2026-04-17T23:30:17.885950587Z" level=info msg="ImageCreate event name:\"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:17.888704 containerd[1455]: time="2026-04-17T23:30:17.888652582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:17.890471 containerd[1455]: time="2026-04-17T23:30:17.890400459Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.4\" with image id \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:06b4bb208634a107ab9e6c50cdb9df178d05166a700c0cc448d59522091074b5\", size \"27576022\" in 1.592559847s"
Apr 17 23:30:17.890504 containerd[1455]: time="2026-04-17T23:30:17.890471864Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.4\" returns image reference \"sha256:840f22aa169cc9a11114a874832f60c2d4a4f7767d107303cd1ca6d9c228ee8b\""
Apr 17 23:30:17.891206 containerd[1455]: time="2026-04-17T23:30:17.891135932Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\""
Apr 17 23:30:18.746648 containerd[1455]: time="2026-04-17T23:30:18.746560030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:18.747681 containerd[1455]: time="2026-04-17T23:30:18.747640031Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.4: active requests=0, bytes read=21451591"
Apr 17 23:30:18.748880 containerd[1455]: time="2026-04-17T23:30:18.748833464Z" level=info msg="ImageCreate event name:\"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:18.751222 containerd[1455]: time="2026-04-17T23:30:18.751152182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:18.752147 containerd[1455]: time="2026-04-17T23:30:18.752103078Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.4\" with image id \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7b036c805d57f203e9efaf43672cff6019b9083a9c0eb107ea8500eace29d8fd\", size \"23018006\" in 860.923551ms"
Apr 17 23:30:18.752147 containerd[1455]: time="2026-04-17T23:30:18.752144878Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.4\" returns image reference \"sha256:96ce7469899d4d3ccad56b1a80b91609cb2203287112d73818296004948bb667\""
Apr 17 23:30:18.752953 containerd[1455]: time="2026-04-17T23:30:18.752901083Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\""
Apr 17 23:30:19.416503 containerd[1455]: time="2026-04-17T23:30:19.416398275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:19.417434 containerd[1455]: time="2026-04-17T23:30:19.417176420Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.4: active requests=0, bytes read=15555222"
Apr 17 23:30:19.418938 containerd[1455]: time="2026-04-17T23:30:19.418832192Z" level=info msg="ImageCreate event name:\"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:19.421675 containerd[1455]: time="2026-04-17T23:30:19.421558838Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:19.422436 containerd[1455]: time="2026-04-17T23:30:19.422404051Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.4\" with image id \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:9054fecb4fa04cc63aec47b0913c8deb3487d414190cd15211f864cfe0d0b4d6\", size \"17121655\" in 669.474314ms"
Apr 17 23:30:19.422531 containerd[1455]: time="2026-04-17T23:30:19.422442823Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.4\" returns image reference \"sha256:a0eecd9b69a38f829c29b535f73c1a3de3c7cc9f1294a44dc42c808faf0a23ff\""
Apr 17 23:30:19.423161 containerd[1455]: time="2026-04-17T23:30:19.423121994Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\""
Apr 17 23:30:20.167158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442910987.mount: Deactivated successfully.
Apr 17 23:30:20.405000 containerd[1455]: time="2026-04-17T23:30:20.404932176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:20.405871 containerd[1455]: time="2026-04-17T23:30:20.405811633Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.4: active requests=0, bytes read=25699819"
Apr 17 23:30:20.407150 containerd[1455]: time="2026-04-17T23:30:20.407032549Z" level=info msg="ImageCreate event name:\"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:20.409264 containerd[1455]: time="2026-04-17T23:30:20.409117067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:20.409515 containerd[1455]: time="2026-04-17T23:30:20.409456246Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.4\" with image id \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\", repo tag \"registry.k8s.io/kube-proxy:v1.35.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:c5daa23c72474e5e4062c320177d3b485fd42e7010f052bc80d657c4c00a0672\", size \"25698944\" in 986.289644ms"
Apr 17 23:30:20.409515 containerd[1455]: time="2026-04-17T23:30:20.409509151Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.4\" returns image reference \"sha256:f21f27cddb23d0d7131dc7c59666b3b0e0b5ca4c3f003225f90307ab6211b6e1\""
Apr 17 23:30:20.410141 containerd[1455]: time="2026-04-17T23:30:20.410100338Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 17 23:30:20.798237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:30:20.808222 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:30:20.817846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1257378504.mount: Deactivated successfully.
Apr 17 23:30:20.946770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:30:20.951587 (kubelet)[1888]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:30:20.995873 kubelet[1888]: E0417 23:30:20.995814 1888 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:30:20.999030 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:30:20.999148 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:30:21.788957 containerd[1455]: time="2026-04-17T23:30:21.788826655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:21.790150 containerd[1455]: time="2026-04-17T23:30:21.790020746Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23555980"
Apr 17 23:30:21.791506 containerd[1455]: time="2026-04-17T23:30:21.791405607Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:21.795694 containerd[1455]: time="2026-04-17T23:30:21.795638859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:21.796612 containerd[1455]: time="2026-04-17T23:30:21.796555252Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 1.386406175s"
Apr 17 23:30:21.796654 containerd[1455]: time="2026-04-17T23:30:21.796612516Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 17 23:30:21.797367 containerd[1455]: time="2026-04-17T23:30:21.797286383Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 17 23:30:22.219765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3514744832.mount: Deactivated successfully.
Apr 17 23:30:22.227054 containerd[1455]: time="2026-04-17T23:30:22.226973915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:22.229025 containerd[1455]: time="2026-04-17T23:30:22.228911175Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321150"
Apr 17 23:30:22.230006 containerd[1455]: time="2026-04-17T23:30:22.229939972Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:22.234946 containerd[1455]: time="2026-04-17T23:30:22.234830559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:22.236518 containerd[1455]: time="2026-04-17T23:30:22.236226521Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 438.704634ms"
Apr 17 23:30:22.236518 containerd[1455]: time="2026-04-17T23:30:22.236283633Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 17 23:30:22.237409 containerd[1455]: time="2026-04-17T23:30:22.237259777Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 17 23:30:22.610099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3923869525.mount: Deactivated successfully.
Apr 17 23:30:23.305687 containerd[1455]: time="2026-04-17T23:30:23.305473236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:23.306190 containerd[1455]: time="2026-04-17T23:30:23.306079509Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23643979"
Apr 17 23:30:23.307512 containerd[1455]: time="2026-04-17T23:30:23.307384113Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:23.310674 containerd[1455]: time="2026-04-17T23:30:23.310597464Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:23.311628 containerd[1455]: time="2026-04-17T23:30:23.311436193Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.074019658s"
Apr 17 23:30:23.311628 containerd[1455]: time="2026-04-17T23:30:23.311472511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 17 23:30:24.700083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:30:24.722137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:30:24.753603 systemd[1]: Reloading requested from client PID 2041 ('systemctl') (unit session-7.scope)...
Apr 17 23:30:24.753649 systemd[1]: Reloading...
Apr 17 23:30:24.829441 zram_generator::config[2083]: No configuration found.
Apr 17 23:30:24.917716 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:30:24.969517 systemd[1]: Reloading finished in 215 ms.
Apr 17 23:30:25.019369 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:30:25.022800 systemd[1]: kubelet.service: Deactivated successfully.
Apr 17 23:30:25.023187 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:30:25.024749 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:30:25.161994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:30:25.167368 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 17 23:30:25.236477 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 17 23:30:25.433039 kubelet[2130]: I0417 23:30:25.432728 2130 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 17 23:30:25.433039 kubelet[2130]: I0417 23:30:25.432964 2130 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 17 23:30:25.433039 kubelet[2130]: I0417 23:30:25.432994 2130 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 17 23:30:25.433039 kubelet[2130]: I0417 23:30:25.432998 2130 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 17 23:30:25.433622 kubelet[2130]: I0417 23:30:25.433553 2130 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 17 23:30:25.470449 kubelet[2130]: E0417 23:30:25.470371 2130 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.28:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.28:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 17 23:30:25.470631 kubelet[2130]: I0417 23:30:25.470591 2130 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:30:25.474392 kubelet[2130]: E0417 23:30:25.474194 2130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 17 23:30:25.474392 kubelet[2130]: I0417 23:30:25.474271 2130 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 17 23:30:25.480590 kubelet[2130]: I0417 23:30:25.479037 2130 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 17 23:30:25.481080 kubelet[2130]: I0417 23:30:25.481027 2130 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:30:25.481852 kubelet[2130]: I0417 23:30:25.481059 2130 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 17 23:30:25.481852 kubelet[2130]: I0417 23:30:25.481880 2130 topology_manager.go:143] "Creating topology manager with none policy"
Apr 17 23:30:25.481852 kubelet[2130]: I0417 23:30:25.481889 2130 container_manager_linux.go:308] "Creating device plugin manager"
Apr 17 23:30:25.482097 kubelet[2130]: I0417 23:30:25.481971 2130 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 17 23:30:25.483956 kubelet[2130]: I0417 23:30:25.483922 2130 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 17 23:30:25.484261 kubelet[2130]: I0417 23:30:25.484176 2130 kubelet.go:482] "Attempting to sync node with API server"
Apr 17 23:30:25.484261 kubelet[2130]: I0417 23:30:25.484219 2130 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 17 23:30:25.484446 kubelet[2130]: I0417 23:30:25.484273 2130 kubelet.go:394] "Adding apiserver pod source"
Apr 17 23:30:25.484446 kubelet[2130]: I0417 23:30:25.484285 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 17 23:30:25.487177 kubelet[2130]: I0417 23:30:25.486997 2130 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 17 23:30:25.489641 kubelet[2130]: I0417 23:30:25.489239 2130 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 17 23:30:25.489641 kubelet[2130]: I0417 23:30:25.489613 2130 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 17 23:30:25.489827 kubelet[2130]: W0417 23:30:25.489759 2130 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:30:25.492829 kubelet[2130]: I0417 23:30:25.492797 2130 server.go:1257] "Started kubelet" Apr 17 23:30:25.492894 kubelet[2130]: I0417 23:30:25.492864 2130 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:30:25.493981 kubelet[2130]: I0417 23:30:25.493929 2130 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:30:25.494412 kubelet[2130]: I0417 23:30:25.494243 2130 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 23:30:25.497412 kubelet[2130]: I0417 23:30:25.494922 2130 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:30:25.497412 kubelet[2130]: I0417 23:30:25.494984 2130 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:30:25.497412 kubelet[2130]: I0417 23:30:25.495196 2130 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:30:25.497412 kubelet[2130]: I0417 23:30:25.495511 2130 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:30:25.497412 kubelet[2130]: E0417 23:30:25.497124 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:25.497412 kubelet[2130]: I0417 23:30:25.497158 2130 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 23:30:25.497630 kubelet[2130]: I0417 23:30:25.497619 2130 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:30:25.497713 kubelet[2130]: I0417 23:30:25.497706 2130 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:30:25.498545 kubelet[2130]: I0417 23:30:25.498529 2130 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:30:25.498688 kubelet[2130]: I0417 23:30:25.498674 2130 factory.go:221] Registration of the crio container factory 
failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:30:25.499041 kubelet[2130]: E0417 23:30:25.496865 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.28:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.28:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a748cd811f2470 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:30:25.492747376 +0000 UTC m=+0.321635018,LastTimestamp:2026-04-17 23:30:25.492747376 +0000 UTC m=+0.321635018,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 23:30:25.499041 kubelet[2130]: E0417 23:30:25.498999 2130 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="200ms" Apr 17 23:30:25.500757 kubelet[2130]: I0417 23:30:25.500735 2130 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:30:25.501541 kubelet[2130]: E0417 23:30:25.501477 2130 kubelet.go:1656] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:30:25.514379 kubelet[2130]: I0417 23:30:25.514283 2130 cpu_manager.go:225] "Starting" policy="none" Apr 17 23:30:25.514379 kubelet[2130]: I0417 23:30:25.514368 2130 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 23:30:25.514379 kubelet[2130]: I0417 23:30:25.514383 2130 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 23:30:25.516739 kubelet[2130]: I0417 23:30:25.516673 2130 policy_none.go:50] "Start" Apr 17 23:30:25.516739 kubelet[2130]: I0417 23:30:25.516715 2130 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:30:25.516739 kubelet[2130]: I0417 23:30:25.516723 2130 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:30:25.519659 kubelet[2130]: I0417 23:30:25.519594 2130 policy_none.go:44] "Start" Apr 17 23:30:25.527268 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:30:25.529040 kubelet[2130]: I0417 23:30:25.528944 2130 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 17 23:30:25.531043 kubelet[2130]: I0417 23:30:25.530989 2130 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:30:25.531043 kubelet[2130]: I0417 23:30:25.531004 2130 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 23:30:25.531043 kubelet[2130]: I0417 23:30:25.531025 2130 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 23:30:25.531161 kubelet[2130]: E0417 23:30:25.531068 2130 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:30:25.542504 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 17 23:30:25.546225 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 23:30:25.562773 kubelet[2130]: E0417 23:30:25.562724 2130 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:30:25.563389 kubelet[2130]: I0417 23:30:25.563086 2130 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 23:30:25.563389 kubelet[2130]: I0417 23:30:25.563101 2130 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:30:25.563389 kubelet[2130]: I0417 23:30:25.563352 2130 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 23:30:25.565585 kubelet[2130]: E0417 23:30:25.565438 2130 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:30:25.565739 kubelet[2130]: E0417 23:30:25.565670 2130 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 23:30:25.643718 systemd[1]: Created slice kubepods-burstable-pod2a6ba440d6d2a87919ce245d2bf4225e.slice - libcontainer container kubepods-burstable-pod2a6ba440d6d2a87919ce245d2bf4225e.slice. 
Apr 17 23:30:25.667816 kubelet[2130]: I0417 23:30:25.667667 2130 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:30:25.668253 kubelet[2130]: E0417 23:30:25.668193 2130 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Apr 17 23:30:25.671525 kubelet[2130]: E0417 23:30:25.671461 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:25.674228 systemd[1]: Created slice kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice - libcontainer container kubepods-burstable-pod14bc29ec35edba17af38052ec24275f2.slice. Apr 17 23:30:25.687405 kubelet[2130]: E0417 23:30:25.687215 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:25.690639 systemd[1]: Created slice kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice - libcontainer container kubepods-burstable-podf7c88b30fc803a3ec6b6c138191bdaca.slice. 
Apr 17 23:30:25.692529 kubelet[2130]: E0417 23:30:25.692479 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:25.699908 kubelet[2130]: E0417 23:30:25.699722 2130 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="400ms" Apr 17 23:30:25.800155 kubelet[2130]: I0417 23:30:25.799732 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a6ba440d6d2a87919ce245d2bf4225e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a6ba440d6d2a87919ce245d2bf4225e\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:25.800155 kubelet[2130]: I0417 23:30:25.799798 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a6ba440d6d2a87919ce245d2bf4225e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a6ba440d6d2a87919ce245d2bf4225e\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:25.800155 kubelet[2130]: I0417 23:30:25.799926 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:30:25.800155 kubelet[2130]: I0417 23:30:25.799943 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2a6ba440d6d2a87919ce245d2bf4225e-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"2a6ba440d6d2a87919ce245d2bf4225e\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:25.800155 kubelet[2130]: I0417 23:30:25.800086 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:25.800560 kubelet[2130]: I0417 23:30:25.800101 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:25.800560 kubelet[2130]: I0417 23:30:25.800114 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:25.800560 kubelet[2130]: I0417 23:30:25.800134 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:25.800560 kubelet[2130]: I0417 23:30:25.800153 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:25.871945 kubelet[2130]: I0417 23:30:25.871633 2130 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:30:25.872079 kubelet[2130]: E0417 23:30:25.872015 2130 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Apr 17 23:30:25.977193 kubelet[2130]: E0417 23:30:25.977078 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:25.978533 containerd[1455]: time="2026-04-17T23:30:25.978230665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2a6ba440d6d2a87919ce245d2bf4225e,Namespace:kube-system,Attempt:0,}" Apr 17 23:30:25.991188 kubelet[2130]: E0417 23:30:25.991116 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:25.991972 containerd[1455]: time="2026-04-17T23:30:25.991909560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,}" Apr 17 23:30:25.995505 kubelet[2130]: E0417 23:30:25.995431 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:25.995942 containerd[1455]: time="2026-04-17T23:30:25.995902469Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,}" Apr 17 23:30:26.101405 kubelet[2130]: E0417 23:30:26.101061 2130 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="800ms" Apr 17 23:30:26.276280 kubelet[2130]: I0417 23:30:26.276235 2130 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:30:26.276934 kubelet[2130]: E0417 23:30:26.276823 2130 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.0.0.28:6443/api/v1/nodes\": dial tcp 10.0.0.28:6443: connect: connection refused" node="localhost" Apr 17 23:30:26.401756 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3667760950.mount: Deactivated successfully. Apr 17 23:30:26.411913 containerd[1455]: time="2026-04-17T23:30:26.411771138Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:30:26.414557 containerd[1455]: time="2026-04-17T23:30:26.414155736Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:30:26.416390 containerd[1455]: time="2026-04-17T23:30:26.416195950Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:30:26.423940 containerd[1455]: time="2026-04-17T23:30:26.423741070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:30:26.427063 containerd[1455]: time="2026-04-17T23:30:26.426963639Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:30:26.428553 containerd[1455]: time="2026-04-17T23:30:26.428511416Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:30:26.429590 containerd[1455]: time="2026-04-17T23:30:26.429510647Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 17 23:30:26.431792 containerd[1455]: time="2026-04-17T23:30:26.431696718Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:30:26.434000 containerd[1455]: time="2026-04-17T23:30:26.433932941Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 437.959919ms" Apr 17 23:30:26.436690 containerd[1455]: time="2026-04-17T23:30:26.436631971Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 458.26117ms" Apr 17 23:30:26.437467 containerd[1455]: time="2026-04-17T23:30:26.437395066Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 445.388244ms" Apr 17 23:30:26.581625 containerd[1455]: time="2026-04-17T23:30:26.579455703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:30:26.581625 containerd[1455]: time="2026-04-17T23:30:26.580360679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:30:26.581625 containerd[1455]: time="2026-04-17T23:30:26.580372302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:26.581625 containerd[1455]: time="2026-04-17T23:30:26.580425346Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:26.583372 containerd[1455]: time="2026-04-17T23:30:26.583074621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:30:26.583372 containerd[1455]: time="2026-04-17T23:30:26.583176475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:30:26.583372 containerd[1455]: time="2026-04-17T23:30:26.583188850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:26.583372 containerd[1455]: time="2026-04-17T23:30:26.583237579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:26.587059 containerd[1455]: time="2026-04-17T23:30:26.586869527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:30:26.587059 containerd[1455]: time="2026-04-17T23:30:26.586929947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:30:26.587059 containerd[1455]: time="2026-04-17T23:30:26.586942985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:26.587675 containerd[1455]: time="2026-04-17T23:30:26.587489374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:26.606916 systemd[1]: Started cri-containerd-3c562803f533d7f62faa00a5398c67965dbd95be04c1efe6b0cfe563bca7c5f8.scope - libcontainer container 3c562803f533d7f62faa00a5398c67965dbd95be04c1efe6b0cfe563bca7c5f8. Apr 17 23:30:26.613268 systemd[1]: Started cri-containerd-92f2a94ffda34935ccfd844bd74b79781038cf7e422834d7f6ca436ab64df67d.scope - libcontainer container 92f2a94ffda34935ccfd844bd74b79781038cf7e422834d7f6ca436ab64df67d. Apr 17 23:30:26.614828 systemd[1]: Started cri-containerd-cde8d0a9a583c83315dbbe9c4687550fe1268efb833c29038b1d858f59a771b8.scope - libcontainer container cde8d0a9a583c83315dbbe9c4687550fe1268efb833c29038b1d858f59a771b8. 
Apr 17 23:30:26.663985 containerd[1455]: time="2026-04-17T23:30:26.663892923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:2a6ba440d6d2a87919ce245d2bf4225e,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c562803f533d7f62faa00a5398c67965dbd95be04c1efe6b0cfe563bca7c5f8\"" Apr 17 23:30:26.665350 kubelet[2130]: E0417 23:30:26.665187 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:26.670650 containerd[1455]: time="2026-04-17T23:30:26.670449736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:14bc29ec35edba17af38052ec24275f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"cde8d0a9a583c83315dbbe9c4687550fe1268efb833c29038b1d858f59a771b8\"" Apr 17 23:30:26.671417 containerd[1455]: time="2026-04-17T23:30:26.671397181Z" level=info msg="CreateContainer within sandbox \"3c562803f533d7f62faa00a5398c67965dbd95be04c1efe6b0cfe563bca7c5f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:30:26.673189 kubelet[2130]: E0417 23:30:26.671808 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:26.678274 containerd[1455]: time="2026-04-17T23:30:26.678112047Z" level=info msg="CreateContainer within sandbox \"cde8d0a9a583c83315dbbe9c4687550fe1268efb833c29038b1d858f59a771b8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:30:26.689936 containerd[1455]: time="2026-04-17T23:30:26.689778604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:f7c88b30fc803a3ec6b6c138191bdaca,Namespace:kube-system,Attempt:0,} returns sandbox id \"92f2a94ffda34935ccfd844bd74b79781038cf7e422834d7f6ca436ab64df67d\"" Apr 17 
23:30:26.690627 kubelet[2130]: E0417 23:30:26.690591 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:26.697148 containerd[1455]: time="2026-04-17T23:30:26.695987657Z" level=info msg="CreateContainer within sandbox \"3c562803f533d7f62faa00a5398c67965dbd95be04c1efe6b0cfe563bca7c5f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6c34bc39454cb3431458b8cbe66571b5e1d095a9a1f99ee41b3054b6f934b602\"" Apr 17 23:30:26.698046 containerd[1455]: time="2026-04-17T23:30:26.697904000Z" level=info msg="StartContainer for \"6c34bc39454cb3431458b8cbe66571b5e1d095a9a1f99ee41b3054b6f934b602\"" Apr 17 23:30:26.698639 containerd[1455]: time="2026-04-17T23:30:26.698485706Z" level=info msg="CreateContainer within sandbox \"92f2a94ffda34935ccfd844bd74b79781038cf7e422834d7f6ca436ab64df67d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:30:26.704922 containerd[1455]: time="2026-04-17T23:30:26.704634254Z" level=info msg="CreateContainer within sandbox \"cde8d0a9a583c83315dbbe9c4687550fe1268efb833c29038b1d858f59a771b8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7ea00d1c649a1afed1377916aad9830c9b50254811d9126596b18ef1580e85ea\"" Apr 17 23:30:26.706238 containerd[1455]: time="2026-04-17T23:30:26.706129842Z" level=info msg="StartContainer for \"7ea00d1c649a1afed1377916aad9830c9b50254811d9126596b18ef1580e85ea\"" Apr 17 23:30:26.716859 containerd[1455]: time="2026-04-17T23:30:26.716758540Z" level=info msg="CreateContainer within sandbox \"92f2a94ffda34935ccfd844bd74b79781038cf7e422834d7f6ca436ab64df67d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eb135675cbac408b1ceae257ace00061664828d64dcc5f2a9d3cf6bb9e2c22e0\"" Apr 17 23:30:26.717909 containerd[1455]: time="2026-04-17T23:30:26.717848289Z" level=info 
msg="StartContainer for \"eb135675cbac408b1ceae257ace00061664828d64dcc5f2a9d3cf6bb9e2c22e0\"" Apr 17 23:30:26.740992 systemd[1]: Started cri-containerd-6c34bc39454cb3431458b8cbe66571b5e1d095a9a1f99ee41b3054b6f934b602.scope - libcontainer container 6c34bc39454cb3431458b8cbe66571b5e1d095a9a1f99ee41b3054b6f934b602. Apr 17 23:30:26.754183 systemd[1]: Started cri-containerd-7ea00d1c649a1afed1377916aad9830c9b50254811d9126596b18ef1580e85ea.scope - libcontainer container 7ea00d1c649a1afed1377916aad9830c9b50254811d9126596b18ef1580e85ea. Apr 17 23:30:26.761554 systemd[1]: Started cri-containerd-eb135675cbac408b1ceae257ace00061664828d64dcc5f2a9d3cf6bb9e2c22e0.scope - libcontainer container eb135675cbac408b1ceae257ace00061664828d64dcc5f2a9d3cf6bb9e2c22e0. Apr 17 23:30:26.814592 containerd[1455]: time="2026-04-17T23:30:26.814459723Z" level=info msg="StartContainer for \"6c34bc39454cb3431458b8cbe66571b5e1d095a9a1f99ee41b3054b6f934b602\" returns successfully" Apr 17 23:30:26.836383 containerd[1455]: time="2026-04-17T23:30:26.835879070Z" level=info msg="StartContainer for \"eb135675cbac408b1ceae257ace00061664828d64dcc5f2a9d3cf6bb9e2c22e0\" returns successfully" Apr 17 23:30:26.846564 containerd[1455]: time="2026-04-17T23:30:26.846384032Z" level=info msg="StartContainer for \"7ea00d1c649a1afed1377916aad9830c9b50254811d9126596b18ef1580e85ea\" returns successfully" Apr 17 23:30:26.908858 kubelet[2130]: E0417 23:30:26.908042 2130 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.28:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.28:6443: connect: connection refused" interval="1.6s" Apr 17 23:30:27.078818 kubelet[2130]: I0417 23:30:27.078649 2130 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:30:27.546962 kubelet[2130]: E0417 23:30:27.546787 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"localhost\" not found" node="localhost" Apr 17 23:30:27.546962 kubelet[2130]: E0417 23:30:27.546930 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:27.549355 kubelet[2130]: E0417 23:30:27.549037 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:27.549355 kubelet[2130]: E0417 23:30:27.549123 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:27.550740 kubelet[2130]: E0417 23:30:27.550640 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:27.550740 kubelet[2130]: E0417 23:30:27.550708 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:27.939648 kubelet[2130]: I0417 23:30:27.939221 2130 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 17 23:30:27.939648 kubelet[2130]: E0417 23:30:27.939547 2130 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 17 23:30:27.977052 kubelet[2130]: E0417 23:30:27.976998 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:28.077990 kubelet[2130]: E0417 23:30:28.077861 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:28.179130 kubelet[2130]: E0417 23:30:28.178980 2130 kubelet_node_status.go:392] "Error getting the current node 
from lister" err="node \"localhost\" not found" Apr 17 23:30:28.279822 kubelet[2130]: E0417 23:30:28.279335 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:28.409659 kernel: hrtimer: interrupt took 5796944 ns Apr 17 23:30:28.429774 kubelet[2130]: E0417 23:30:28.411771 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:28.518804 kubelet[2130]: E0417 23:30:28.517860 2130 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:30:28.554784 kubelet[2130]: E0417 23:30:28.554603 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:28.554784 kubelet[2130]: E0417 23:30:28.554739 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:28.555221 kubelet[2130]: E0417 23:30:28.554825 2130 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:30:28.555221 kubelet[2130]: E0417 23:30:28.554926 2130 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:28.700243 kubelet[2130]: I0417 23:30:28.699931 2130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:28.714485 kubelet[2130]: E0417 23:30:28.714282 2130 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:28.714485 
kubelet[2130]: I0417 23:30:28.714437 2130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:28.720841 kubelet[2130]: E0417 23:30:28.720548 2130 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:28.720841 kubelet[2130]: I0417 23:30:28.720617 2130 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:30:28.724969 kubelet[2130]: E0417 23:30:28.724871 2130 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Apr 17 23:30:29.487963 kubelet[2130]: I0417 23:30:29.487901 2130 apiserver.go:52] "Watching apiserver" Apr 17 23:30:29.498780 kubelet[2130]: I0417 23:30:29.498587 2130 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:30:30.630211 systemd[1]: Reloading requested from client PID 2423 ('systemctl') (unit session-7.scope)... Apr 17 23:30:30.630232 systemd[1]: Reloading... Apr 17 23:30:30.736809 zram_generator::config[2462]: No configuration found. Apr 17 23:30:30.956152 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:30:31.049562 systemd[1]: Reloading finished in 418 ms. Apr 17 23:30:31.085397 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:30:31.102886 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:30:31.103232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:30:31.103374 systemd[1]: kubelet.service: Consumed 1.025s CPU time, 129.8M memory peak, 0B memory swap peak. Apr 17 23:30:31.111873 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:30:31.262938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:30:31.278864 (kubelet)[2507]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:30:31.343562 kubelet[2507]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:30:31.357003 kubelet[2507]: I0417 23:30:31.356844 2507 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 17 23:30:31.357003 kubelet[2507]: I0417 23:30:31.356940 2507 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:30:31.357003 kubelet[2507]: I0417 23:30:31.356952 2507 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 17 23:30:31.357003 kubelet[2507]: I0417 23:30:31.356958 2507 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:30:31.357408 kubelet[2507]: I0417 23:30:31.357184 2507 server.go:951] "Client rotation is on, will bootstrap in background" Apr 17 23:30:31.358422 kubelet[2507]: I0417 23:30:31.358352 2507 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:30:31.360585 kubelet[2507]: I0417 23:30:31.360514 2507 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:30:31.364117 kubelet[2507]: E0417 23:30:31.364064 2507 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:30:31.364229 kubelet[2507]: I0417 23:30:31.364142 2507 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 17 23:30:31.368449 kubelet[2507]: I0417 23:30:31.368411 2507 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 17 23:30:31.368599 kubelet[2507]: I0417 23:30:31.368545 2507 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:30:31.368741 kubelet[2507]: I0417 23:30:31.368590 2507 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:30:31.368912 kubelet[2507]: I0417 23:30:31.368745 2507 topology_manager.go:143] "Creating topology manager with none policy" Apr 17 23:30:31.368912 
kubelet[2507]: I0417 23:30:31.368755 2507 container_manager_linux.go:308] "Creating device plugin manager" Apr 17 23:30:31.368912 kubelet[2507]: I0417 23:30:31.368777 2507 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 17 23:30:31.369081 kubelet[2507]: I0417 23:30:31.369019 2507 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 17 23:30:31.369190 kubelet[2507]: I0417 23:30:31.369167 2507 kubelet.go:482] "Attempting to sync node with API server" Apr 17 23:30:31.369207 kubelet[2507]: I0417 23:30:31.369189 2507 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:30:31.369222 kubelet[2507]: I0417 23:30:31.369207 2507 kubelet.go:394] "Adding apiserver pod source" Apr 17 23:30:31.369222 kubelet[2507]: I0417 23:30:31.369217 2507 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:30:31.371243 kubelet[2507]: I0417 23:30:31.371228 2507 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:30:31.372238 kubelet[2507]: I0417 23:30:31.372220 2507 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:30:31.372431 kubelet[2507]: I0417 23:30:31.372423 2507 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 17 23:30:31.378520 kubelet[2507]: I0417 23:30:31.378461 2507 server.go:1257] "Started kubelet" Apr 17 23:30:31.380415 kubelet[2507]: I0417 23:30:31.380222 2507 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 17 23:30:31.382747 kubelet[2507]: I0417 23:30:31.382674 2507 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:30:31.383451 kubelet[2507]: I0417 23:30:31.383380 2507 ratelimit.go:56] "Setting rate limiting for endpoint" 
service="podresources" qps=100 burstTokens=10 Apr 17 23:30:31.383498 kubelet[2507]: I0417 23:30:31.383472 2507 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 17 23:30:31.383747 kubelet[2507]: I0417 23:30:31.383687 2507 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:30:31.385184 kubelet[2507]: I0417 23:30:31.385127 2507 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:30:31.386170 kubelet[2507]: I0417 23:30:31.385960 2507 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:30:31.389261 kubelet[2507]: E0417 23:30:31.388064 2507 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:30:31.392392 kubelet[2507]: I0417 23:30:31.392246 2507 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 17 23:30:31.392477 kubelet[2507]: I0417 23:30:31.392450 2507 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 17 23:30:31.392987 kubelet[2507]: I0417 23:30:31.392573 2507 reconciler.go:29] "Reconciler: start to sync state" Apr 17 23:30:31.397631 kubelet[2507]: I0417 23:30:31.397266 2507 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:30:31.397631 kubelet[2507]: I0417 23:30:31.397368 2507 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:30:31.397631 kubelet[2507]: I0417 23:30:31.397578 2507 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:30:31.410976 kubelet[2507]: I0417 23:30:31.410855 2507 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 17 23:30:31.413577 kubelet[2507]: I0417 23:30:31.413520 2507 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 17 23:30:31.413577 kubelet[2507]: I0417 23:30:31.413535 2507 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 17 23:30:31.413577 kubelet[2507]: I0417 23:30:31.413557 2507 kubelet.go:2501] "Starting kubelet main sync loop" Apr 17 23:30:31.413897 kubelet[2507]: E0417 23:30:31.413782 2507 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:30:31.434170 kubelet[2507]: I0417 23:30:31.434141 2507 cpu_manager.go:225] "Starting" policy="none" Apr 17 23:30:31.434419 kubelet[2507]: I0417 23:30:31.434408 2507 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 17 23:30:31.434463 kubelet[2507]: I0417 23:30:31.434458 2507 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 17 23:30:31.434668 kubelet[2507]: I0417 23:30:31.434605 2507 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 17 23:30:31.434923 kubelet[2507]: I0417 23:30:31.434783 2507 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 17 23:30:31.434923 kubelet[2507]: I0417 23:30:31.434801 2507 policy_none.go:50] "Start" Apr 17 23:30:31.434923 kubelet[2507]: I0417 23:30:31.434807 2507 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 17 23:30:31.435243 kubelet[2507]: I0417 23:30:31.435164 2507 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 17 23:30:31.435614 kubelet[2507]: I0417 23:30:31.435604 2507 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 17 23:30:31.435661 kubelet[2507]: I0417 23:30:31.435657 2507 
policy_none.go:44] "Start" Apr 17 23:30:31.441229 kubelet[2507]: E0417 23:30:31.441182 2507 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:30:31.441579 kubelet[2507]: I0417 23:30:31.441534 2507 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 17 23:30:31.441721 kubelet[2507]: I0417 23:30:31.441675 2507 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:30:31.442087 kubelet[2507]: I0417 23:30:31.441984 2507 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 17 23:30:31.443655 kubelet[2507]: E0417 23:30:31.443377 2507 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:30:31.515780 kubelet[2507]: I0417 23:30:31.515512 2507 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:30:31.515780 kubelet[2507]: I0417 23:30:31.515510 2507 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:31.515925 kubelet[2507]: I0417 23:30:31.515909 2507 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:31.552041 kubelet[2507]: I0417 23:30:31.551933 2507 kubelet_node_status.go:74] "Attempting to register node" node="localhost" Apr 17 23:30:31.560856 kubelet[2507]: I0417 23:30:31.560771 2507 kubelet_node_status.go:123] "Node was previously registered" node="localhost" Apr 17 23:30:31.560856 kubelet[2507]: I0417 23:30:31.560829 2507 kubelet_node_status.go:77] "Successfully registered node" node="localhost" Apr 17 23:30:31.593103 kubelet[2507]: I0417 23:30:31.592906 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:31.593103 kubelet[2507]: I0417 23:30:31.592936 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:31.593103 kubelet[2507]: I0417 23:30:31.593079 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f7c88b30fc803a3ec6b6c138191bdaca-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"f7c88b30fc803a3ec6b6c138191bdaca\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:30:31.593103 kubelet[2507]: I0417 23:30:31.593102 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2a6ba440d6d2a87919ce245d2bf4225e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a6ba440d6d2a87919ce245d2bf4225e\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:31.593103 kubelet[2507]: I0417 23:30:31.593115 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2a6ba440d6d2a87919ce245d2bf4225e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"2a6ba440d6d2a87919ce245d2bf4225e\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:31.593633 kubelet[2507]: I0417 23:30:31.593136 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/2a6ba440d6d2a87919ce245d2bf4225e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"2a6ba440d6d2a87919ce245d2bf4225e\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:30:31.593633 kubelet[2507]: I0417 23:30:31.593147 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:31.593633 kubelet[2507]: I0417 23:30:31.593157 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:31.593633 kubelet[2507]: I0417 23:30:31.593171 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/14bc29ec35edba17af38052ec24275f2-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"14bc29ec35edba17af38052ec24275f2\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:30:31.823843 kubelet[2507]: E0417 23:30:31.823655 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:31.823843 kubelet[2507]: E0417 23:30:31.823794 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:31.824191 kubelet[2507]: E0417 23:30:31.824028 2507 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:32.371374 kubelet[2507]: I0417 23:30:32.371244 2507 apiserver.go:52] "Watching apiserver" Apr 17 23:30:32.393176 kubelet[2507]: I0417 23:30:32.393054 2507 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 17 23:30:32.425841 kubelet[2507]: E0417 23:30:32.425152 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:32.426122 kubelet[2507]: E0417 23:30:32.426052 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:32.426535 kubelet[2507]: E0417 23:30:32.426477 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:32.470105 kubelet[2507]: I0417 23:30:32.470012 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.469997915 podStartE2EDuration="1.469997915s" podCreationTimestamp="2026-04-17 23:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:30:32.463444159 +0000 UTC m=+1.178649016" watchObservedRunningTime="2026-04-17 23:30:32.469997915 +0000 UTC m=+1.185202770" Apr 17 23:30:32.483533 kubelet[2507]: I0417 23:30:32.483467 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.483452767 podStartE2EDuration="1.483452767s" podCreationTimestamp="2026-04-17 23:30:31 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:30:32.482652314 +0000 UTC m=+1.197857169" watchObservedRunningTime="2026-04-17 23:30:32.483452767 +0000 UTC m=+1.198657613" Apr 17 23:30:32.507218 kubelet[2507]: I0417 23:30:32.506943 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.5069306519999999 podStartE2EDuration="1.506930652s" podCreationTimestamp="2026-04-17 23:30:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:30:32.494266052 +0000 UTC m=+1.209470913" watchObservedRunningTime="2026-04-17 23:30:32.506930652 +0000 UTC m=+1.222135505" Apr 17 23:30:33.427232 kubelet[2507]: E0417 23:30:33.427143 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:33.427913 kubelet[2507]: E0417 23:30:33.427824 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:34.430629 kubelet[2507]: E0417 23:30:34.430080 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:34.430629 kubelet[2507]: E0417 23:30:34.430207 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:35.431786 kubelet[2507]: E0417 23:30:35.431658 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 
23:30:36.559535 kubelet[2507]: I0417 23:30:36.559423 2507 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:30:36.560833 containerd[1455]: time="2026-04-17T23:30:36.560774366Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:30:36.561185 kubelet[2507]: I0417 23:30:36.561150 2507 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:30:37.274777 systemd[1]: Created slice kubepods-besteffort-poddf6452a0_3bbb_44d9_9777_8eaa7a1ba135.slice - libcontainer container kubepods-besteffort-poddf6452a0_3bbb_44d9_9777_8eaa7a1ba135.slice. Apr 17 23:30:37.448610 kubelet[2507]: I0417 23:30:37.448115 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/df6452a0-3bbb-44d9-9777-8eaa7a1ba135-xtables-lock\") pod \"kube-proxy-tcwpd\" (UID: \"df6452a0-3bbb-44d9-9777-8eaa7a1ba135\") " pod="kube-system/kube-proxy-tcwpd" Apr 17 23:30:37.448610 kubelet[2507]: I0417 23:30:37.448588 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/df6452a0-3bbb-44d9-9777-8eaa7a1ba135-kube-proxy\") pod \"kube-proxy-tcwpd\" (UID: \"df6452a0-3bbb-44d9-9777-8eaa7a1ba135\") " pod="kube-system/kube-proxy-tcwpd" Apr 17 23:30:37.448823 kubelet[2507]: I0417 23:30:37.448661 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/df6452a0-3bbb-44d9-9777-8eaa7a1ba135-lib-modules\") pod \"kube-proxy-tcwpd\" (UID: \"df6452a0-3bbb-44d9-9777-8eaa7a1ba135\") " pod="kube-system/kube-proxy-tcwpd" Apr 17 23:30:37.448823 kubelet[2507]: I0417 23:30:37.448675 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-wfp9z\" (UniqueName: \"kubernetes.io/projected/df6452a0-3bbb-44d9-9777-8eaa7a1ba135-kube-api-access-wfp9z\") pod \"kube-proxy-tcwpd\" (UID: \"df6452a0-3bbb-44d9-9777-8eaa7a1ba135\") " pod="kube-system/kube-proxy-tcwpd" Apr 17 23:30:37.466106 kubelet[2507]: E0417 23:30:37.466042 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:37.587788 kubelet[2507]: E0417 23:30:37.587584 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:37.589064 containerd[1455]: time="2026-04-17T23:30:37.588593125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tcwpd,Uid:df6452a0-3bbb-44d9-9777-8eaa7a1ba135,Namespace:kube-system,Attempt:0,}" Apr 17 23:30:37.621532 containerd[1455]: time="2026-04-17T23:30:37.621063704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:30:37.621532 containerd[1455]: time="2026-04-17T23:30:37.621189778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:30:37.621532 containerd[1455]: time="2026-04-17T23:30:37.621204723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:37.621532 containerd[1455]: time="2026-04-17T23:30:37.621358680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:37.653871 systemd[1]: Started cri-containerd-2ce4eef0e9fafa7b35990291d86a97aa1af317f1fb7e8e12b9c689f96cc30e81.scope - libcontainer container 2ce4eef0e9fafa7b35990291d86a97aa1af317f1fb7e8e12b9c689f96cc30e81. 
Apr 17 23:30:37.692697 containerd[1455]: time="2026-04-17T23:30:37.692514926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tcwpd,Uid:df6452a0-3bbb-44d9-9777-8eaa7a1ba135,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ce4eef0e9fafa7b35990291d86a97aa1af317f1fb7e8e12b9c689f96cc30e81\"" Apr 17 23:30:37.694853 kubelet[2507]: E0417 23:30:37.694157 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:37.708519 containerd[1455]: time="2026-04-17T23:30:37.708359167Z" level=info msg="CreateContainer within sandbox \"2ce4eef0e9fafa7b35990291d86a97aa1af317f1fb7e8e12b9c689f96cc30e81\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:30:37.748013 containerd[1455]: time="2026-04-17T23:30:37.747727578Z" level=info msg="CreateContainer within sandbox \"2ce4eef0e9fafa7b35990291d86a97aa1af317f1fb7e8e12b9c689f96cc30e81\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"45211ec147d96ea1cfafe35cc47d70e83d019025fde58960a20a5ff8b9c82fa1\"" Apr 17 23:30:37.751567 containerd[1455]: time="2026-04-17T23:30:37.751273947Z" level=info msg="StartContainer for \"45211ec147d96ea1cfafe35cc47d70e83d019025fde58960a20a5ff8b9c82fa1\"" Apr 17 23:30:37.782616 systemd[1]: Created slice kubepods-besteffort-pod23e55092_39a3_4f52_9e69_6ab6531bf1dd.slice - libcontainer container kubepods-besteffort-pod23e55092_39a3_4f52_9e69_6ab6531bf1dd.slice. Apr 17 23:30:37.832725 systemd[1]: Started cri-containerd-45211ec147d96ea1cfafe35cc47d70e83d019025fde58960a20a5ff8b9c82fa1.scope - libcontainer container 45211ec147d96ea1cfafe35cc47d70e83d019025fde58960a20a5ff8b9c82fa1. 
Apr 17 23:30:37.852100 kubelet[2507]: I0417 23:30:37.851768 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw672\" (UniqueName: \"kubernetes.io/projected/23e55092-39a3-4f52-9e69-6ab6531bf1dd-kube-api-access-cw672\") pod \"tigera-operator-6cf4cccc57-d5c97\" (UID: \"23e55092-39a3-4f52-9e69-6ab6531bf1dd\") " pod="tigera-operator/tigera-operator-6cf4cccc57-d5c97" Apr 17 23:30:37.852100 kubelet[2507]: I0417 23:30:37.851829 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/23e55092-39a3-4f52-9e69-6ab6531bf1dd-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-d5c97\" (UID: \"23e55092-39a3-4f52-9e69-6ab6531bf1dd\") " pod="tigera-operator/tigera-operator-6cf4cccc57-d5c97" Apr 17 23:30:37.880017 containerd[1455]: time="2026-04-17T23:30:37.879939644Z" level=info msg="StartContainer for \"45211ec147d96ea1cfafe35cc47d70e83d019025fde58960a20a5ff8b9c82fa1\" returns successfully" Apr 17 23:30:38.103208 containerd[1455]: time="2026-04-17T23:30:38.102878941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-d5c97,Uid:23e55092-39a3-4f52-9e69-6ab6531bf1dd,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:30:38.139430 containerd[1455]: time="2026-04-17T23:30:38.138831209Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:30:38.139430 containerd[1455]: time="2026-04-17T23:30:38.138951593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:30:38.140642 containerd[1455]: time="2026-04-17T23:30:38.140547853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:38.140774 containerd[1455]: time="2026-04-17T23:30:38.140732613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:38.164703 systemd[1]: Started cri-containerd-ff7db0fa4111568fe933406c1d39003623ed31f4329343f6ce5566ae537ebf3e.scope - libcontainer container ff7db0fa4111568fe933406c1d39003623ed31f4329343f6ce5566ae537ebf3e. Apr 17 23:30:38.228612 containerd[1455]: time="2026-04-17T23:30:38.228442940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-d5c97,Uid:23e55092-39a3-4f52-9e69-6ab6531bf1dd,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ff7db0fa4111568fe933406c1d39003623ed31f4329343f6ce5566ae537ebf3e\"" Apr 17 23:30:38.233038 containerd[1455]: time="2026-04-17T23:30:38.232466346Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:30:38.443708 kubelet[2507]: E0417 23:30:38.443215 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:39.732789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008579371.mount: Deactivated successfully. 
Apr 17 23:30:40.285088 kubelet[2507]: E0417 23:30:40.284965 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:40.299372 kubelet[2507]: I0417 23:30:40.299152 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-tcwpd" podStartSLOduration=3.299137556 podStartE2EDuration="3.299137556s" podCreationTimestamp="2026-04-17 23:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:30:38.464483066 +0000 UTC m=+7.179687934" watchObservedRunningTime="2026-04-17 23:30:40.299137556 +0000 UTC m=+9.014342413" Apr 17 23:30:40.635284 containerd[1455]: time="2026-04-17T23:30:40.634953305Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:40.636642 containerd[1455]: time="2026-04-17T23:30:40.636547223Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 23:30:40.637710 containerd[1455]: time="2026-04-17T23:30:40.637644467Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:40.640382 containerd[1455]: time="2026-04-17T23:30:40.640260186Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:40.640879 containerd[1455]: time="2026-04-17T23:30:40.640826444Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag 
\"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.408320224s" Apr 17 23:30:40.640879 containerd[1455]: time="2026-04-17T23:30:40.640875122Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 23:30:40.649405 containerd[1455]: time="2026-04-17T23:30:40.649347068Z" level=info msg="CreateContainer within sandbox \"ff7db0fa4111568fe933406c1d39003623ed31f4329343f6ce5566ae537ebf3e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:30:40.662055 containerd[1455]: time="2026-04-17T23:30:40.661924858Z" level=info msg="CreateContainer within sandbox \"ff7db0fa4111568fe933406c1d39003623ed31f4329343f6ce5566ae537ebf3e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a8443fbf9962ccf141877e6d90cd240f9eded38a483b0278cf90802f1f23083a\"" Apr 17 23:30:40.663839 containerd[1455]: time="2026-04-17T23:30:40.662848673Z" level=info msg="StartContainer for \"a8443fbf9962ccf141877e6d90cd240f9eded38a483b0278cf90802f1f23083a\"" Apr 17 23:30:40.702607 systemd[1]: Started cri-containerd-a8443fbf9962ccf141877e6d90cd240f9eded38a483b0278cf90802f1f23083a.scope - libcontainer container a8443fbf9962ccf141877e6d90cd240f9eded38a483b0278cf90802f1f23083a. 
Apr 17 23:30:40.729795 containerd[1455]: time="2026-04-17T23:30:40.729677636Z" level=info msg="StartContainer for \"a8443fbf9962ccf141877e6d90cd240f9eded38a483b0278cf90802f1f23083a\" returns successfully" Apr 17 23:30:44.468420 kubelet[2507]: E0417 23:30:44.468364 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:44.489490 kubelet[2507]: I0417 23:30:44.489415 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-d5c97" podStartSLOduration=5.078654465 podStartE2EDuration="7.489401102s" podCreationTimestamp="2026-04-17 23:30:37 +0000 UTC" firstStartedPulling="2026-04-17 23:30:38.231656493 +0000 UTC m=+6.946861347" lastFinishedPulling="2026-04-17 23:30:40.642403137 +0000 UTC m=+9.357607984" observedRunningTime="2026-04-17 23:30:41.476559193 +0000 UTC m=+10.191764049" watchObservedRunningTime="2026-04-17 23:30:44.489401102 +0000 UTC m=+13.204605961" Apr 17 23:30:45.932839 sudo[1630]: pam_unix(sudo:session): session closed for user root Apr 17 23:30:45.936683 sshd[1627]: pam_unix(sshd:session): session closed for user core Apr 17 23:30:45.942765 systemd[1]: sshd@6-10.0.0.28:22-10.0.0.1:51976.service: Deactivated successfully. Apr 17 23:30:45.945014 systemd[1]: session-7.scope: Deactivated successfully. Apr 17 23:30:45.945547 systemd[1]: session-7.scope: Consumed 3.905s CPU time, 167.5M memory peak, 0B memory swap peak. Apr 17 23:30:45.948163 systemd-logind[1438]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:30:45.949975 systemd-logind[1438]: Removed session 7. 
Apr 17 23:30:47.473856 kubelet[2507]: E0417 23:30:47.473061 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:47.984642 systemd[1]: Created slice kubepods-besteffort-pod5082970f_0581_4524_a5e0_98e42af5a712.slice - libcontainer container kubepods-besteffort-pod5082970f_0581_4524_a5e0_98e42af5a712.slice. Apr 17 23:30:48.054633 systemd[1]: Created slice kubepods-besteffort-pod07579969_82c0_4417_bb8b_1b4785402c79.slice - libcontainer container kubepods-besteffort-pod07579969_82c0_4417_bb8b_1b4785402c79.slice. Apr 17 23:30:48.128130 kubelet[2507]: E0417 23:30:48.128002 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9" Apr 17 23:30:48.146377 kubelet[2507]: I0417 23:30:48.146241 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-cni-net-dir\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.146377 kubelet[2507]: I0417 23:30:48.146280 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5082970f-0581-4524-a5e0-98e42af5a712-tigera-ca-bundle\") pod \"calico-typha-d4d8fb578-lkpvb\" (UID: \"5082970f-0581-4524-a5e0-98e42af5a712\") " pod="calico-system/calico-typha-d4d8fb578-lkpvb" Apr 17 23:30:48.146377 kubelet[2507]: I0417 23:30:48.146383 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-99x9x\" (UniqueName: \"kubernetes.io/projected/5082970f-0581-4524-a5e0-98e42af5a712-kube-api-access-99x9x\") pod \"calico-typha-d4d8fb578-lkpvb\" (UID: \"5082970f-0581-4524-a5e0-98e42af5a712\") " pod="calico-system/calico-typha-d4d8fb578-lkpvb" Apr 17 23:30:48.146904 kubelet[2507]: I0417 23:30:48.146403 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-sys-fs\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.146904 kubelet[2507]: I0417 23:30:48.146418 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5082970f-0581-4524-a5e0-98e42af5a712-typha-certs\") pod \"calico-typha-d4d8fb578-lkpvb\" (UID: \"5082970f-0581-4524-a5e0-98e42af5a712\") " pod="calico-system/calico-typha-d4d8fb578-lkpvb" Apr 17 23:30:48.146904 kubelet[2507]: I0417 23:30:48.146428 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-cni-bin-dir\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.146904 kubelet[2507]: I0417 23:30:48.146438 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-xtables-lock\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.146904 kubelet[2507]: I0417 23:30:48.146449 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-cni-log-dir\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.147021 kubelet[2507]: I0417 23:30:48.146459 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-flexvol-driver-host\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.147021 kubelet[2507]: I0417 23:30:48.146469 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07579969-82c0-4417-bb8b-1b4785402c79-tigera-ca-bundle\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.147021 kubelet[2507]: I0417 23:30:48.146478 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wspp\" (UniqueName: \"kubernetes.io/projected/07579969-82c0-4417-bb8b-1b4785402c79-kube-api-access-6wspp\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.147021 kubelet[2507]: I0417 23:30:48.146504 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/07579969-82c0-4417-bb8b-1b4785402c79-node-certs\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.147021 kubelet[2507]: I0417 23:30:48.146544 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: 
\"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-bpffs\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.147154 kubelet[2507]: I0417 23:30:48.146643 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-lib-modules\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.147154 kubelet[2507]: I0417 23:30:48.146657 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-nodeproc\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.147154 kubelet[2507]: I0417 23:30:48.146676 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-var-lib-calico\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.147154 kubelet[2507]: I0417 23:30:48.146706 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-var-run-calico\") pod \"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.147154 kubelet[2507]: I0417 23:30:48.146721 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/07579969-82c0-4417-bb8b-1b4785402c79-policysync\") pod 
\"calico-node-mbf6x\" (UID: \"07579969-82c0-4417-bb8b-1b4785402c79\") " pod="calico-system/calico-node-mbf6x" Apr 17 23:30:48.248373 kubelet[2507]: I0417 23:30:48.247678 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/40dcf753-14d8-4454-adb8-95d51e1d49d9-socket-dir\") pod \"csi-node-driver-ktqpr\" (UID: \"40dcf753-14d8-4454-adb8-95d51e1d49d9\") " pod="calico-system/csi-node-driver-ktqpr" Apr 17 23:30:48.248373 kubelet[2507]: I0417 23:30:48.247763 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/40dcf753-14d8-4454-adb8-95d51e1d49d9-registration-dir\") pod \"csi-node-driver-ktqpr\" (UID: \"40dcf753-14d8-4454-adb8-95d51e1d49d9\") " pod="calico-system/csi-node-driver-ktqpr" Apr 17 23:30:48.248373 kubelet[2507]: I0417 23:30:48.247784 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/40dcf753-14d8-4454-adb8-95d51e1d49d9-varrun\") pod \"csi-node-driver-ktqpr\" (UID: \"40dcf753-14d8-4454-adb8-95d51e1d49d9\") " pod="calico-system/csi-node-driver-ktqpr" Apr 17 23:30:48.248373 kubelet[2507]: I0417 23:30:48.247853 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j8c\" (UniqueName: \"kubernetes.io/projected/40dcf753-14d8-4454-adb8-95d51e1d49d9-kube-api-access-t9j8c\") pod \"csi-node-driver-ktqpr\" (UID: \"40dcf753-14d8-4454-adb8-95d51e1d49d9\") " pod="calico-system/csi-node-driver-ktqpr" Apr 17 23:30:48.248373 kubelet[2507]: I0417 23:30:48.247879 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/40dcf753-14d8-4454-adb8-95d51e1d49d9-kubelet-dir\") pod \"csi-node-driver-ktqpr\" (UID: 
\"40dcf753-14d8-4454-adb8-95d51e1d49d9\") " pod="calico-system/csi-node-driver-ktqpr" Apr 17 23:30:48.255175 kubelet[2507]: E0417 23:30:48.253404 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.255175 kubelet[2507]: W0417 23:30:48.253421 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.255175 kubelet[2507]: E0417 23:30:48.253438 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.255175 kubelet[2507]: E0417 23:30:48.254098 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.255175 kubelet[2507]: W0417 23:30:48.254106 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.255175 kubelet[2507]: E0417 23:30:48.254117 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:48.257543 kubelet[2507]: E0417 23:30:48.257097 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.257543 kubelet[2507]: W0417 23:30:48.257115 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.257543 kubelet[2507]: E0417 23:30:48.257555 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.265613 kubelet[2507]: E0417 23:30:48.265497 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.265759 kubelet[2507]: W0417 23:30:48.265683 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.265759 kubelet[2507]: E0417 23:30:48.265705 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:48.268136 kubelet[2507]: E0417 23:30:48.268041 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.268136 kubelet[2507]: W0417 23:30:48.268085 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.268136 kubelet[2507]: E0417 23:30:48.268105 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.273217 kubelet[2507]: E0417 23:30:48.273174 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.273217 kubelet[2507]: W0417 23:30:48.273211 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.273370 kubelet[2507]: E0417 23:30:48.273230 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:48.273643 kubelet[2507]: E0417 23:30:48.273613 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.273643 kubelet[2507]: W0417 23:30:48.273620 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.273643 kubelet[2507]: E0417 23:30:48.273627 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.297628 kubelet[2507]: E0417 23:30:48.297431 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:48.298786 containerd[1455]: time="2026-04-17T23:30:48.298676232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d4d8fb578-lkpvb,Uid:5082970f-0581-4524-a5e0-98e42af5a712,Namespace:calico-system,Attempt:0,}" Apr 17 23:30:48.337251 containerd[1455]: time="2026-04-17T23:30:48.333379013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:30:48.337251 containerd[1455]: time="2026-04-17T23:30:48.333461282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:30:48.337251 containerd[1455]: time="2026-04-17T23:30:48.333470816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:48.337251 containerd[1455]: time="2026-04-17T23:30:48.333534868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:30:48.350088 kubelet[2507]: E0417 23:30:48.350027 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.350088 kubelet[2507]: W0417 23:30:48.350072 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.350088 kubelet[2507]: E0417 23:30:48.350092 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.350673 kubelet[2507]: E0417 23:30:48.350579 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.350673 kubelet[2507]: W0417 23:30:48.350669 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.350756 kubelet[2507]: E0417 23:30:48.350681 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:48.351106 kubelet[2507]: E0417 23:30:48.351064 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.351106 kubelet[2507]: W0417 23:30:48.351095 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.351106 kubelet[2507]: E0417 23:30:48.351103 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.351651 kubelet[2507]: E0417 23:30:48.351535 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.351651 kubelet[2507]: W0417 23:30:48.351566 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.351651 kubelet[2507]: E0417 23:30:48.351573 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:48.351964 kubelet[2507]: E0417 23:30:48.351930 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.351964 kubelet[2507]: W0417 23:30:48.351965 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.352015 kubelet[2507]: E0417 23:30:48.351973 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.352429 kubelet[2507]: E0417 23:30:48.352287 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.352429 kubelet[2507]: W0417 23:30:48.352384 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.352429 kubelet[2507]: E0417 23:30:48.352391 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:48.352704 kubelet[2507]: E0417 23:30:48.352661 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.352704 kubelet[2507]: W0417 23:30:48.352692 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.352704 kubelet[2507]: E0417 23:30:48.352699 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.352949 kubelet[2507]: E0417 23:30:48.352922 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.352974 kubelet[2507]: W0417 23:30:48.352949 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.352974 kubelet[2507]: E0417 23:30:48.352955 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:48.353253 kubelet[2507]: E0417 23:30:48.353226 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.353275 kubelet[2507]: W0417 23:30:48.353253 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.353275 kubelet[2507]: E0417 23:30:48.353259 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.353537 kubelet[2507]: E0417 23:30:48.353493 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.353537 kubelet[2507]: W0417 23:30:48.353524 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.353537 kubelet[2507]: E0417 23:30:48.353530 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:48.353814 kubelet[2507]: E0417 23:30:48.353737 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.353814 kubelet[2507]: W0417 23:30:48.353766 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.353814 kubelet[2507]: E0417 23:30:48.353785 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.356271 kubelet[2507]: E0417 23:30:48.354186 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.356271 kubelet[2507]: W0417 23:30:48.354194 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.356271 kubelet[2507]: E0417 23:30:48.354202 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:48.356271 kubelet[2507]: E0417 23:30:48.354785 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.356271 kubelet[2507]: W0417 23:30:48.354791 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.356271 kubelet[2507]: E0417 23:30:48.354797 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.356271 kubelet[2507]: E0417 23:30:48.355083 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.356271 kubelet[2507]: W0417 23:30:48.355088 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.356271 kubelet[2507]: E0417 23:30:48.355094 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:48.356271 kubelet[2507]: E0417 23:30:48.355554 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.356544 kubelet[2507]: W0417 23:30:48.355559 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.356544 kubelet[2507]: E0417 23:30:48.355564 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:48.356544 kubelet[2507]: E0417 23:30:48.355849 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:48.356544 kubelet[2507]: W0417 23:30:48.355855 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:48.356544 kubelet[2507]: E0417 23:30:48.355860 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 17 23:30:48.356773 kubelet[2507]: E0417 23:30:48.356737 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:48.356793 kubelet[2507]: W0417 23:30:48.356772 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:48.356793 kubelet[2507]: E0417 23:30:48.356783 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:48.357078 kubelet[2507]: E0417 23:30:48.357050 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:48.357102 kubelet[2507]: W0417 23:30:48.357079 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:48.357102 kubelet[2507]: E0417 23:30:48.357086 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:48.357669 kubelet[2507]: E0417 23:30:48.357657 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:48.357856 kubelet[2507]: W0417 23:30:48.357710 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:48.357856 kubelet[2507]: E0417 23:30:48.357724 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:48.358124 kubelet[2507]: E0417 23:30:48.358021 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:48.358124 kubelet[2507]: W0417 23:30:48.358028 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:48.358124 kubelet[2507]: E0417 23:30:48.358035 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:48.358664 kubelet[2507]: E0417 23:30:48.358633 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:48.358664 kubelet[2507]: W0417 23:30:48.358663 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:48.358912 kubelet[2507]: E0417 23:30:48.358673 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:48.359015 kubelet[2507]: E0417 23:30:48.358948 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:48.359015 kubelet[2507]: W0417 23:30:48.358982 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:48.359015 kubelet[2507]: E0417 23:30:48.358990 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:48.359399 kubelet[2507]: E0417 23:30:48.359364 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:48.359399 kubelet[2507]: W0417 23:30:48.359395 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:48.359450 kubelet[2507]: E0417 23:30:48.359402 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:48.359903 kubelet[2507]: E0417 23:30:48.359851 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:48.359903 kubelet[2507]: W0417 23:30:48.359882 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:48.359903 kubelet[2507]: E0417 23:30:48.359890 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:48.360871 kubelet[2507]: E0417 23:30:48.360806 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:48.361132 kubelet[2507]: W0417 23:30:48.361070 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:48.361241 kubelet[2507]: E0417 23:30:48.361111 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:48.363065 containerd[1455]: time="2026-04-17T23:30:48.363008615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mbf6x,Uid:07579969-82c0-4417-bb8b-1b4785402c79,Namespace:calico-system,Attempt:0,}"
Apr 17 23:30:48.366716 systemd[1]: Started cri-containerd-c0191b923c066b4c788e17013422d4de5377284e25ebb8e901d86bb9f600c512.scope - libcontainer container c0191b923c066b4c788e17013422d4de5377284e25ebb8e901d86bb9f600c512.
Apr 17 23:30:48.378629 kubelet[2507]: E0417 23:30:48.378384 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:48.378629 kubelet[2507]: W0417 23:30:48.378507 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:48.378629 kubelet[2507]: E0417 23:30:48.378531 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:48.399700 containerd[1455]: time="2026-04-17T23:30:48.399260284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:30:48.399700 containerd[1455]: time="2026-04-17T23:30:48.399439603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:30:48.399700 containerd[1455]: time="2026-04-17T23:30:48.399455851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:48.399700 containerd[1455]: time="2026-04-17T23:30:48.399522513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:30:48.422005 systemd[1]: Started cri-containerd-896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43.scope - libcontainer container 896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43.
Apr 17 23:30:48.425244 containerd[1455]: time="2026-04-17T23:30:48.425178185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d4d8fb578-lkpvb,Uid:5082970f-0581-4524-a5e0-98e42af5a712,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0191b923c066b4c788e17013422d4de5377284e25ebb8e901d86bb9f600c512\""
Apr 17 23:30:48.426396 kubelet[2507]: E0417 23:30:48.426284 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:48.431379 containerd[1455]: time="2026-04-17T23:30:48.431223681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 17 23:30:48.451107 containerd[1455]: time="2026-04-17T23:30:48.450998151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mbf6x,Uid:07579969-82c0-4417-bb8b-1b4785402c79,Namespace:calico-system,Attempt:0,} returns sandbox id \"896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43\""
Apr 17 23:30:49.268205 systemd[1]: run-containerd-runc-k8s.io-c0191b923c066b4c788e17013422d4de5377284e25ebb8e901d86bb9f600c512-runc.hJsKie.mount: Deactivated successfully.
Apr 17 23:30:49.432185 kubelet[2507]: E0417 23:30:49.431611 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9"
Apr 17 23:30:49.991247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4117503426.mount: Deactivated successfully.
Apr 17 23:30:50.291095 kubelet[2507]: E0417 23:30:50.290773 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:50.389541 kubelet[2507]: E0417 23:30:50.389451 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.389541 kubelet[2507]: W0417 23:30:50.389492 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.389541 kubelet[2507]: E0417 23:30:50.389516 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.389952 kubelet[2507]: E0417 23:30:50.389919 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.389952 kubelet[2507]: W0417 23:30:50.389930 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.389952 kubelet[2507]: E0417 23:30:50.389941 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.390382 kubelet[2507]: E0417 23:30:50.390347 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.390382 kubelet[2507]: W0417 23:30:50.390381 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.390518 kubelet[2507]: E0417 23:30:50.390394 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.390832 kubelet[2507]: E0417 23:30:50.390803 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.390832 kubelet[2507]: W0417 23:30:50.390831 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.390918 kubelet[2507]: E0417 23:30:50.390842 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.391186 kubelet[2507]: E0417 23:30:50.391144 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.391250 kubelet[2507]: W0417 23:30:50.391186 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.391250 kubelet[2507]: E0417 23:30:50.391194 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.391814 kubelet[2507]: E0417 23:30:50.391781 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.391848 kubelet[2507]: W0417 23:30:50.391817 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.391848 kubelet[2507]: E0417 23:30:50.391829 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.392261 kubelet[2507]: E0417 23:30:50.392228 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.392261 kubelet[2507]: W0417 23:30:50.392256 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.392261 kubelet[2507]: E0417 23:30:50.392264 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.393583 kubelet[2507]: E0417 23:30:50.393542 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.393583 kubelet[2507]: W0417 23:30:50.393573 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.393583 kubelet[2507]: E0417 23:30:50.393583 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.394026 kubelet[2507]: E0417 23:30:50.393998 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.394026 kubelet[2507]: W0417 23:30:50.394026 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.394169 kubelet[2507]: E0417 23:30:50.394062 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.394503 kubelet[2507]: E0417 23:30:50.394472 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.394503 kubelet[2507]: W0417 23:30:50.394500 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.394570 kubelet[2507]: E0417 23:30:50.394509 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.394822 kubelet[2507]: E0417 23:30:50.394725 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.394822 kubelet[2507]: W0417 23:30:50.394757 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.394822 kubelet[2507]: E0417 23:30:50.394768 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.395068 kubelet[2507]: E0417 23:30:50.395036 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.395068 kubelet[2507]: W0417 23:30:50.395064 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.395144 kubelet[2507]: E0417 23:30:50.395071 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.395567 kubelet[2507]: E0417 23:30:50.395425 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.395742 kubelet[2507]: W0417 23:30:50.395635 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.395742 kubelet[2507]: E0417 23:30:50.395662 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.396220 kubelet[2507]: E0417 23:30:50.396087 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.396220 kubelet[2507]: W0417 23:30:50.396117 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.396220 kubelet[2507]: E0417 23:30:50.396126 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.396582 kubelet[2507]: E0417 23:30:50.396452 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.396582 kubelet[2507]: W0417 23:30:50.396477 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.396582 kubelet[2507]: E0417 23:30:50.396484 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.396834 kubelet[2507]: E0417 23:30:50.396801 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.396834 kubelet[2507]: W0417 23:30:50.396831 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.396933 kubelet[2507]: E0417 23:30:50.396841 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.397129 kubelet[2507]: E0417 23:30:50.397101 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.397129 kubelet[2507]: W0417 23:30:50.397128 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.397168 kubelet[2507]: E0417 23:30:50.397134 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.397480 kubelet[2507]: E0417 23:30:50.397452 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.397480 kubelet[2507]: W0417 23:30:50.397478 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.397529 kubelet[2507]: E0417 23:30:50.397484 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.397970 kubelet[2507]: E0417 23:30:50.397930 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.397970 kubelet[2507]: W0417 23:30:50.397960 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.397970 kubelet[2507]: E0417 23:30:50.397967 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.398413 kubelet[2507]: E0417 23:30:50.398265 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.398413 kubelet[2507]: W0417 23:30:50.398362 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.398413 kubelet[2507]: E0417 23:30:50.398370 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.398681 kubelet[2507]: E0417 23:30:50.398644 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.398681 kubelet[2507]: W0417 23:30:50.398671 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.398681 kubelet[2507]: E0417 23:30:50.398678 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.399040 kubelet[2507]: E0417 23:30:50.398911 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.399040 kubelet[2507]: W0417 23:30:50.398939 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.399040 kubelet[2507]: E0417 23:30:50.398944 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.399111 kubelet[2507]: E0417 23:30:50.399066 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.399111 kubelet[2507]: W0417 23:30:50.399070 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.399111 kubelet[2507]: E0417 23:30:50.399076 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.399492 kubelet[2507]: E0417 23:30:50.399461 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.399492 kubelet[2507]: W0417 23:30:50.399488 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.399492 kubelet[2507]: E0417 23:30:50.399493 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:50.399757 kubelet[2507]: E0417 23:30:50.399722 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:50.399757 kubelet[2507]: W0417 23:30:50.399747 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:50.399757 kubelet[2507]: E0417 23:30:50.399753 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:51.414724 kubelet[2507]: E0417 23:30:51.414567 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9"
Apr 17 23:30:52.312995 containerd[1455]: time="2026-04-17T23:30:52.312843376Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:52.313945 containerd[1455]: time="2026-04-17T23:30:52.313898148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 17 23:30:52.319213 containerd[1455]: time="2026-04-17T23:30:52.318605714Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:52.323030 containerd[1455]: time="2026-04-17T23:30:52.322965317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:30:52.324269 containerd[1455]: time="2026-04-17T23:30:52.324205447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.892923016s"
Apr 17 23:30:52.324269 containerd[1455]: time="2026-04-17T23:30:52.324259218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 17 23:30:52.325883 containerd[1455]: time="2026-04-17T23:30:52.325662646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 17 23:30:52.346192 containerd[1455]: time="2026-04-17T23:30:52.346138600Z" level=info msg="CreateContainer within sandbox \"c0191b923c066b4c788e17013422d4de5377284e25ebb8e901d86bb9f600c512\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 17 23:30:52.366753 containerd[1455]: time="2026-04-17T23:30:52.366681816Z" level=info msg="CreateContainer within sandbox \"c0191b923c066b4c788e17013422d4de5377284e25ebb8e901d86bb9f600c512\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5bdf1d5b1864705889840e9ad5164d1e67fcf2c4f4334ffb9ad25340c5d1d9f9\""
Apr 17 23:30:52.367575 containerd[1455]: time="2026-04-17T23:30:52.367425321Z" level=info msg="StartContainer for \"5bdf1d5b1864705889840e9ad5164d1e67fcf2c4f4334ffb9ad25340c5d1d9f9\""
Apr 17 23:30:52.480951 systemd[1]: Started cri-containerd-5bdf1d5b1864705889840e9ad5164d1e67fcf2c4f4334ffb9ad25340c5d1d9f9.scope - libcontainer container 5bdf1d5b1864705889840e9ad5164d1e67fcf2c4f4334ffb9ad25340c5d1d9f9.
Apr 17 23:30:52.525761 containerd[1455]: time="2026-04-17T23:30:52.525621348Z" level=info msg="StartContainer for \"5bdf1d5b1864705889840e9ad5164d1e67fcf2c4f4334ffb9ad25340c5d1d9f9\" returns successfully"
Apr 17 23:30:53.414725 kubelet[2507]: E0417 23:30:53.414601 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9"
Apr 17 23:30:53.511349 kubelet[2507]: E0417 23:30:53.511270 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:30:53.529863 kubelet[2507]: I0417 23:30:53.529800 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-d4d8fb578-lkpvb" podStartSLOduration=2.635119734 podStartE2EDuration="6.529787509s" podCreationTimestamp="2026-04-17 23:30:47 +0000 UTC" firstStartedPulling="2026-04-17 23:30:48.430731826 +0000 UTC m=+17.145936672" lastFinishedPulling="2026-04-17 23:30:52.325399601 +0000 UTC m=+21.040604447" observedRunningTime="2026-04-17 23:30:53.529652355 +0000 UTC m=+22.244857213" watchObservedRunningTime="2026-04-17 23:30:53.529787509 +0000 UTC m=+22.244992367"
Apr 17 23:30:53.556900 kubelet[2507]: E0417 23:30:53.556816 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:53.556900 kubelet[2507]: W0417 23:30:53.556838 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:53.556900 kubelet[2507]: E0417 23:30:53.556866 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:53.557340 kubelet[2507]: E0417 23:30:53.557266 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:53.557375 kubelet[2507]: W0417 23:30:53.557369 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:53.557393 kubelet[2507]: E0417 23:30:53.557379 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:53.558085 kubelet[2507]: E0417 23:30:53.557944 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:53.558085 kubelet[2507]: W0417 23:30:53.557994 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:53.558085 kubelet[2507]: E0417 23:30:53.558011 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:53.558642 kubelet[2507]: E0417 23:30:53.558605 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:53.558642 kubelet[2507]: W0417 23:30:53.558637 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:53.558733 kubelet[2507]: E0417 23:30:53.558653 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:53.559137 kubelet[2507]: E0417 23:30:53.559013 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:53.559137 kubelet[2507]: W0417 23:30:53.559046 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:53.559137 kubelet[2507]: E0417 23:30:53.559057 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:53.559384 kubelet[2507]: E0417 23:30:53.559279 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:53.559411 kubelet[2507]: W0417 23:30:53.559401 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:53.559427 kubelet[2507]: E0417 23:30:53.559411 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:53.559677 kubelet[2507]: E0417 23:30:53.559648 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:53.559677 kubelet[2507]: W0417 23:30:53.559675 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:53.559753 kubelet[2507]: E0417 23:30:53.559681 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:53.559953 kubelet[2507]: E0417 23:30:53.559911 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:53.559953 kubelet[2507]: W0417 23:30:53.559942 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:53.559953 kubelet[2507]: E0417 23:30:53.559952 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:30:53.560407 kubelet[2507]: E0417 23:30:53.560174 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:30:53.560407 kubelet[2507]: W0417 23:30:53.560183 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:30:53.560407 kubelet[2507]: E0417 23:30:53.560192 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 17 23:30:53.560672 kubelet[2507]: E0417 23:30:53.560644 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.560672 kubelet[2507]: W0417 23:30:53.560654 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.560672 kubelet[2507]: E0417 23:30:53.560662 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.560953 kubelet[2507]: E0417 23:30:53.560889 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.561036 kubelet[2507]: W0417 23:30:53.561003 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.561056 kubelet[2507]: E0417 23:30:53.561038 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.561421 kubelet[2507]: E0417 23:30:53.561365 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.561421 kubelet[2507]: W0417 23:30:53.561373 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.561421 kubelet[2507]: E0417 23:30:53.561381 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.561699 kubelet[2507]: E0417 23:30:53.561633 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.561699 kubelet[2507]: W0417 23:30:53.561639 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.561699 kubelet[2507]: E0417 23:30:53.561645 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.561920 kubelet[2507]: E0417 23:30:53.561890 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.561920 kubelet[2507]: W0417 23:30:53.561919 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.561961 kubelet[2507]: E0417 23:30:53.561928 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.562399 kubelet[2507]: E0417 23:30:53.562369 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.562399 kubelet[2507]: W0417 23:30:53.562396 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.562442 kubelet[2507]: E0417 23:30:53.562404 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.644573 kubelet[2507]: E0417 23:30:53.644431 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.644573 kubelet[2507]: W0417 23:30:53.644475 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.644573 kubelet[2507]: E0417 23:30:53.644494 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.644876 kubelet[2507]: E0417 23:30:53.644793 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.644876 kubelet[2507]: W0417 23:30:53.644799 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.644876 kubelet[2507]: E0417 23:30:53.644805 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.645742 kubelet[2507]: E0417 23:30:53.645659 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.645742 kubelet[2507]: W0417 23:30:53.645688 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.645742 kubelet[2507]: E0417 23:30:53.645710 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.646374 kubelet[2507]: E0417 23:30:53.646191 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.646374 kubelet[2507]: W0417 23:30:53.646223 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.646374 kubelet[2507]: E0417 23:30:53.646264 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.646737 kubelet[2507]: E0417 23:30:53.646655 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.646737 kubelet[2507]: W0417 23:30:53.646685 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.646737 kubelet[2507]: E0417 23:30:53.646691 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.647353 kubelet[2507]: E0417 23:30:53.647159 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.647353 kubelet[2507]: W0417 23:30:53.647200 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.647353 kubelet[2507]: E0417 23:30:53.647224 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.647649 kubelet[2507]: E0417 23:30:53.647604 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.647649 kubelet[2507]: W0417 23:30:53.647637 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.647649 kubelet[2507]: E0417 23:30:53.647648 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.648269 kubelet[2507]: E0417 23:30:53.648066 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.648269 kubelet[2507]: W0417 23:30:53.648096 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.648269 kubelet[2507]: E0417 23:30:53.648103 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.648560 kubelet[2507]: E0417 23:30:53.648516 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.648560 kubelet[2507]: W0417 23:30:53.648544 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.648560 kubelet[2507]: E0417 23:30:53.648551 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.648938 kubelet[2507]: E0417 23:30:53.648890 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.648938 kubelet[2507]: W0417 23:30:53.648920 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.648938 kubelet[2507]: E0417 23:30:53.648927 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.649548 kubelet[2507]: E0417 23:30:53.649517 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.649548 kubelet[2507]: W0417 23:30:53.649548 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.649600 kubelet[2507]: E0417 23:30:53.649554 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.649894 kubelet[2507]: E0417 23:30:53.649786 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.649894 kubelet[2507]: W0417 23:30:53.649812 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.649894 kubelet[2507]: E0417 23:30:53.649818 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.650432 kubelet[2507]: E0417 23:30:53.650264 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.650432 kubelet[2507]: W0417 23:30:53.650366 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.650432 kubelet[2507]: E0417 23:30:53.650373 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.650870 kubelet[2507]: E0417 23:30:53.650818 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.650870 kubelet[2507]: W0417 23:30:53.650849 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.650870 kubelet[2507]: E0417 23:30:53.650855 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.651475 kubelet[2507]: E0417 23:30:53.651443 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.651475 kubelet[2507]: W0417 23:30:53.651473 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.651520 kubelet[2507]: E0417 23:30:53.651483 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.652121 kubelet[2507]: E0417 23:30:53.652056 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.652121 kubelet[2507]: W0417 23:30:53.652093 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.652121 kubelet[2507]: E0417 23:30:53.652113 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:30:53.652638 kubelet[2507]: E0417 23:30:53.652605 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.652663 kubelet[2507]: W0417 23:30:53.652638 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.652663 kubelet[2507]: E0417 23:30:53.652649 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.652988 kubelet[2507]: E0417 23:30:53.652956 2507 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:30:53.652988 kubelet[2507]: W0417 23:30:53.652984 2507 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:30:53.653051 kubelet[2507]: E0417 23:30:53.652991 2507 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:30:53.751088 update_engine[1439]: I20260417 23:30:53.748340 1439 update_attempter.cc:509] Updating boot flags... 
Apr 17 23:30:53.784609 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3173) Apr 17 23:30:53.827670 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3177) Apr 17 23:30:54.067162 containerd[1455]: time="2026-04-17T23:30:54.066838294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:54.068517 containerd[1455]: time="2026-04-17T23:30:54.068202681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 17 23:30:54.070265 containerd[1455]: time="2026-04-17T23:30:54.070102470Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:54.074912 containerd[1455]: time="2026-04-17T23:30:54.074808593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:30:54.075735 containerd[1455]: time="2026-04-17T23:30:54.075674489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.749983403s" Apr 17 23:30:54.075735 containerd[1455]: time="2026-04-17T23:30:54.075726718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 
23:30:54.085754 containerd[1455]: time="2026-04-17T23:30:54.085694798Z" level=info msg="CreateContainer within sandbox \"896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:30:54.109187 containerd[1455]: time="2026-04-17T23:30:54.109105151Z" level=info msg="CreateContainer within sandbox \"896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3405edd64a10cbe702ed48bd23158c43b2e1597766265fa42c4d9b610b321597\"" Apr 17 23:30:54.110129 containerd[1455]: time="2026-04-17T23:30:54.110072009Z" level=info msg="StartContainer for \"3405edd64a10cbe702ed48bd23158c43b2e1597766265fa42c4d9b610b321597\"" Apr 17 23:30:54.144550 systemd[1]: Started cri-containerd-3405edd64a10cbe702ed48bd23158c43b2e1597766265fa42c4d9b610b321597.scope - libcontainer container 3405edd64a10cbe702ed48bd23158c43b2e1597766265fa42c4d9b610b321597. Apr 17 23:30:54.174221 containerd[1455]: time="2026-04-17T23:30:54.174079698Z" level=info msg="StartContainer for \"3405edd64a10cbe702ed48bd23158c43b2e1597766265fa42c4d9b610b321597\" returns successfully" Apr 17 23:30:54.187468 systemd[1]: cri-containerd-3405edd64a10cbe702ed48bd23158c43b2e1597766265fa42c4d9b610b321597.scope: Deactivated successfully. Apr 17 23:30:54.216664 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3405edd64a10cbe702ed48bd23158c43b2e1597766265fa42c4d9b610b321597-rootfs.mount: Deactivated successfully. 
Apr 17 23:30:54.299551 containerd[1455]: time="2026-04-17T23:30:54.296816637Z" level=info msg="shim disconnected" id=3405edd64a10cbe702ed48bd23158c43b2e1597766265fa42c4d9b610b321597 namespace=k8s.io Apr 17 23:30:54.299551 containerd[1455]: time="2026-04-17T23:30:54.299518135Z" level=warning msg="cleaning up after shim disconnected" id=3405edd64a10cbe702ed48bd23158c43b2e1597766265fa42c4d9b610b321597 namespace=k8s.io Apr 17 23:30:54.299551 containerd[1455]: time="2026-04-17T23:30:54.299530183Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:30:54.516832 kubelet[2507]: I0417 23:30:54.516640 2507 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:30:54.517138 kubelet[2507]: E0417 23:30:54.516959 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:30:54.519445 containerd[1455]: time="2026-04-17T23:30:54.518411323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:30:55.415732 kubelet[2507]: E0417 23:30:55.415524 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9" Apr 17 23:30:57.414865 kubelet[2507]: E0417 23:30:57.414724 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9" Apr 17 23:30:59.415007 kubelet[2507]: E0417 23:30:59.414933 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9" Apr 17 23:31:01.414756 kubelet[2507]: E0417 23:31:01.414647 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9" Apr 17 23:31:03.124509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1407999088.mount: Deactivated successfully. Apr 17 23:31:03.253390 containerd[1455]: time="2026-04-17T23:31:03.253215633Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 17 23:31:03.257802 containerd[1455]: time="2026-04-17T23:31:03.257635183Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.73918064s" Apr 17 23:31:03.257802 containerd[1455]: time="2026-04-17T23:31:03.257694963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 17 23:31:03.262379 containerd[1455]: time="2026-04-17T23:31:03.262173853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:03.263227 containerd[1455]: time="2026-04-17T23:31:03.262967614Z" level=info msg="ImageCreate event 
name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:03.263829 containerd[1455]: time="2026-04-17T23:31:03.263761543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:03.280772 containerd[1455]: time="2026-04-17T23:31:03.280693306Z" level=info msg="CreateContainer within sandbox \"896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:31:03.298453 containerd[1455]: time="2026-04-17T23:31:03.298265826Z" level=info msg="CreateContainer within sandbox \"896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"31bbf498ed5896df6dbe5e9b17259bfd6f0b96b11c765e78b1190e18fa64e336\"" Apr 17 23:31:03.299114 containerd[1455]: time="2026-04-17T23:31:03.299094570Z" level=info msg="StartContainer for \"31bbf498ed5896df6dbe5e9b17259bfd6f0b96b11c765e78b1190e18fa64e336\"" Apr 17 23:31:03.368667 systemd[1]: Started cri-containerd-31bbf498ed5896df6dbe5e9b17259bfd6f0b96b11c765e78b1190e18fa64e336.scope - libcontainer container 31bbf498ed5896df6dbe5e9b17259bfd6f0b96b11c765e78b1190e18fa64e336. 
Apr 17 23:31:03.435093 kubelet[2507]: E0417 23:31:03.434919 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9" Apr 17 23:31:03.465460 containerd[1455]: time="2026-04-17T23:31:03.465421004Z" level=info msg="StartContainer for \"31bbf498ed5896df6dbe5e9b17259bfd6f0b96b11c765e78b1190e18fa64e336\" returns successfully" Apr 17 23:31:03.501860 systemd[1]: cri-containerd-31bbf498ed5896df6dbe5e9b17259bfd6f0b96b11c765e78b1190e18fa64e336.scope: Deactivated successfully. Apr 17 23:31:03.592047 containerd[1455]: time="2026-04-17T23:31:03.591887091Z" level=info msg="shim disconnected" id=31bbf498ed5896df6dbe5e9b17259bfd6f0b96b11c765e78b1190e18fa64e336 namespace=k8s.io Apr 17 23:31:03.592047 containerd[1455]: time="2026-04-17T23:31:03.591967137Z" level=warning msg="cleaning up after shim disconnected" id=31bbf498ed5896df6dbe5e9b17259bfd6f0b96b11c765e78b1190e18fa64e336 namespace=k8s.io Apr 17 23:31:03.592047 containerd[1455]: time="2026-04-17T23:31:03.591974936Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:31:04.124934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-31bbf498ed5896df6dbe5e9b17259bfd6f0b96b11c765e78b1190e18fa64e336-rootfs.mount: Deactivated successfully. 
Apr 17 23:31:04.572530 containerd[1455]: time="2026-04-17T23:31:04.572469675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:31:05.414598 kubelet[2507]: E0417 23:31:05.414517 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9" Apr 17 23:31:07.415065 kubelet[2507]: E0417 23:31:07.414881 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9" Apr 17 23:31:09.130949 systemd[1]: Started sshd@7-10.0.0.28:22-10.0.0.1:58152.service - OpenSSH per-connection server daemon (10.0.0.1:58152). Apr 17 23:31:09.179992 sshd[3337]: Accepted publickey for core from 10.0.0.1 port 58152 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:31:09.182160 sshd[3337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:31:09.189623 systemd-logind[1438]: New session 8 of user core. Apr 17 23:31:09.199032 systemd[1]: Started session-8.scope - Session 8 of User core. 
Apr 17 23:31:09.415958 kubelet[2507]: E0417 23:31:09.415208 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-ktqpr" podUID="40dcf753-14d8-4454-adb8-95d51e1d49d9" Apr 17 23:31:09.429865 sshd[3337]: pam_unix(sshd:session): session closed for user core Apr 17 23:31:09.435807 systemd[1]: sshd@7-10.0.0.28:22-10.0.0.1:58152.service: Deactivated successfully. Apr 17 23:31:09.437723 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:31:09.440083 systemd-logind[1438]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:31:09.442046 systemd-logind[1438]: Removed session 8. Apr 17 23:31:09.715160 containerd[1455]: time="2026-04-17T23:31:09.714943453Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:09.715876 containerd[1455]: time="2026-04-17T23:31:09.715841923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:31:09.717280 containerd[1455]: time="2026-04-17T23:31:09.717225349Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:09.720250 containerd[1455]: time="2026-04-17T23:31:09.720191146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:09.721271 containerd[1455]: time="2026-04-17T23:31:09.721223450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo 
tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 5.148717522s" Apr 17 23:31:09.721435 containerd[1455]: time="2026-04-17T23:31:09.721271946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:31:09.729517 containerd[1455]: time="2026-04-17T23:31:09.729448680Z" level=info msg="CreateContainer within sandbox \"896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:31:09.765067 containerd[1455]: time="2026-04-17T23:31:09.764879452Z" level=info msg="CreateContainer within sandbox \"896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cdf05d989f3c41c7f5bf06e77ef4fbf645889e51fe994ebd358949e534cef9e8\"" Apr 17 23:31:09.766228 containerd[1455]: time="2026-04-17T23:31:09.766179554Z" level=info msg="StartContainer for \"cdf05d989f3c41c7f5bf06e77ef4fbf645889e51fe994ebd358949e534cef9e8\"" Apr 17 23:31:09.810890 systemd[1]: Started cri-containerd-cdf05d989f3c41c7f5bf06e77ef4fbf645889e51fe994ebd358949e534cef9e8.scope - libcontainer container cdf05d989f3c41c7f5bf06e77ef4fbf645889e51fe994ebd358949e534cef9e8. Apr 17 23:31:09.862866 containerd[1455]: time="2026-04-17T23:31:09.862721603Z" level=info msg="StartContainer for \"cdf05d989f3c41c7f5bf06e77ef4fbf645889e51fe994ebd358949e534cef9e8\" returns successfully" Apr 17 23:31:10.515793 systemd[1]: cri-containerd-cdf05d989f3c41c7f5bf06e77ef4fbf645889e51fe994ebd358949e534cef9e8.scope: Deactivated successfully. Apr 17 23:31:10.544028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdf05d989f3c41c7f5bf06e77ef4fbf645889e51fe994ebd358949e534cef9e8-rootfs.mount: Deactivated successfully. 
Apr 17 23:31:10.547191 containerd[1455]: time="2026-04-17T23:31:10.546983938Z" level=info msg="shim disconnected" id=cdf05d989f3c41c7f5bf06e77ef4fbf645889e51fe994ebd358949e534cef9e8 namespace=k8s.io Apr 17 23:31:10.547191 containerd[1455]: time="2026-04-17T23:31:10.547063374Z" level=warning msg="cleaning up after shim disconnected" id=cdf05d989f3c41c7f5bf06e77ef4fbf645889e51fe994ebd358949e534cef9e8 namespace=k8s.io Apr 17 23:31:10.547191 containerd[1455]: time="2026-04-17T23:31:10.547072328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:31:10.574918 kubelet[2507]: I0417 23:31:10.574815 2507 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 17 23:31:10.639871 systemd[1]: Created slice kubepods-burstable-pod82e80704_4770_4915_a706_1207ff1fb60b.slice - libcontainer container kubepods-burstable-pod82e80704_4770_4915_a706_1207ff1fb60b.slice. Apr 17 23:31:10.642044 containerd[1455]: time="2026-04-17T23:31:10.641487679Z" level=info msg="CreateContainer within sandbox \"896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:31:10.653025 systemd[1]: Created slice kubepods-besteffort-podcd34f5a7_9bf4_4007_8c3f_3be1f839a4eb.slice - libcontainer container kubepods-besteffort-podcd34f5a7_9bf4_4007_8c3f_3be1f839a4eb.slice. Apr 17 23:31:10.664805 systemd[1]: Created slice kubepods-besteffort-pod6a5bfb7f_45ca_40c7_b4cf_b11fe52150a4.slice - libcontainer container kubepods-besteffort-pod6a5bfb7f_45ca_40c7_b4cf_b11fe52150a4.slice. 
Apr 17 23:31:10.665904 kubelet[2507]: I0417 23:31:10.665851 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/82e80704-4770-4915-a706-1207ff1fb60b-config-volume\") pod \"coredns-7d764666f9-c5kzw\" (UID: \"82e80704-4770-4915-a706-1207ff1fb60b\") " pod="kube-system/coredns-7d764666f9-c5kzw" Apr 17 23:31:10.665904 kubelet[2507]: I0417 23:31:10.665879 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-whisker-ca-bundle\") pod \"whisker-7f79f9c6d-x769v\" (UID: \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\") " pod="calico-system/whisker-7f79f9c6d-x769v" Apr 17 23:31:10.666035 kubelet[2507]: I0417 23:31:10.665914 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7293e6d6-a88e-43ea-b7ee-a4239f4cde4f-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-6m64d\" (UID: \"7293e6d6-a88e-43ea-b7ee-a4239f4cde4f\") " pod="calico-system/goldmane-9f7667bb8-6m64d" Apr 17 23:31:10.666035 kubelet[2507]: I0417 23:31:10.665928 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb-calico-apiserver-certs\") pod \"calico-apiserver-5f84d7f489-rm5t6\" (UID: \"cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb\") " pod="calico-system/calico-apiserver-5f84d7f489-rm5t6" Apr 17 23:31:10.666035 kubelet[2507]: I0417 23:31:10.665940 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7jzg\" (UniqueName: \"kubernetes.io/projected/cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb-kube-api-access-l7jzg\") pod \"calico-apiserver-5f84d7f489-rm5t6\" (UID: 
\"cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb\") " pod="calico-system/calico-apiserver-5f84d7f489-rm5t6" Apr 17 23:31:10.666035 kubelet[2507]: I0417 23:31:10.665954 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-nginx-config\") pod \"whisker-7f79f9c6d-x769v\" (UID: \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\") " pod="calico-system/whisker-7f79f9c6d-x769v" Apr 17 23:31:10.666035 kubelet[2507]: I0417 23:31:10.665965 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7293e6d6-a88e-43ea-b7ee-a4239f4cde4f-goldmane-key-pair\") pod \"goldmane-9f7667bb8-6m64d\" (UID: \"7293e6d6-a88e-43ea-b7ee-a4239f4cde4f\") " pod="calico-system/goldmane-9f7667bb8-6m64d" Apr 17 23:31:10.666125 kubelet[2507]: I0417 23:31:10.665979 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsxh2\" (UniqueName: \"kubernetes.io/projected/3b7da14d-a2f0-4dfd-9ad4-fcbdebe97c4f-kube-api-access-zsxh2\") pod \"calico-kube-controllers-857576c76b-6b74d\" (UID: \"3b7da14d-a2f0-4dfd-9ad4-fcbdebe97c4f\") " pod="calico-system/calico-kube-controllers-857576c76b-6b74d" Apr 17 23:31:10.666125 kubelet[2507]: I0417 23:31:10.666000 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3b7da14d-a2f0-4dfd-9ad4-fcbdebe97c4f-tigera-ca-bundle\") pod \"calico-kube-controllers-857576c76b-6b74d\" (UID: \"3b7da14d-a2f0-4dfd-9ad4-fcbdebe97c4f\") " pod="calico-system/calico-kube-controllers-857576c76b-6b74d" Apr 17 23:31:10.666125 kubelet[2507]: I0417 23:31:10.666038 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87vwf\" (UniqueName: 
\"kubernetes.io/projected/82e80704-4770-4915-a706-1207ff1fb60b-kube-api-access-87vwf\") pod \"coredns-7d764666f9-c5kzw\" (UID: \"82e80704-4770-4915-a706-1207ff1fb60b\") " pod="kube-system/coredns-7d764666f9-c5kzw" Apr 17 23:31:10.666125 kubelet[2507]: I0417 23:31:10.666050 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-whisker-backend-key-pair\") pod \"whisker-7f79f9c6d-x769v\" (UID: \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\") " pod="calico-system/whisker-7f79f9c6d-x769v" Apr 17 23:31:10.666125 kubelet[2507]: I0417 23:31:10.666060 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6vbv\" (UniqueName: \"kubernetes.io/projected/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-kube-api-access-t6vbv\") pod \"whisker-7f79f9c6d-x769v\" (UID: \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\") " pod="calico-system/whisker-7f79f9c6d-x769v" Apr 17 23:31:10.666206 kubelet[2507]: I0417 23:31:10.666072 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkxcd\" (UniqueName: \"kubernetes.io/projected/7293e6d6-a88e-43ea-b7ee-a4239f4cde4f-kube-api-access-wkxcd\") pod \"goldmane-9f7667bb8-6m64d\" (UID: \"7293e6d6-a88e-43ea-b7ee-a4239f4cde4f\") " pod="calico-system/goldmane-9f7667bb8-6m64d" Apr 17 23:31:10.666206 kubelet[2507]: I0417 23:31:10.666085 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7293e6d6-a88e-43ea-b7ee-a4239f4cde4f-config\") pod \"goldmane-9f7667bb8-6m64d\" (UID: \"7293e6d6-a88e-43ea-b7ee-a4239f4cde4f\") " pod="calico-system/goldmane-9f7667bb8-6m64d" Apr 17 23:31:10.677692 systemd[1]: Created slice kubepods-besteffort-pod3b7da14d_a2f0_4dfd_9ad4_fcbdebe97c4f.slice - libcontainer container 
kubepods-besteffort-pod3b7da14d_a2f0_4dfd_9ad4_fcbdebe97c4f.slice. Apr 17 23:31:10.687842 systemd[1]: Created slice kubepods-besteffort-pod7293e6d6_a88e_43ea_b7ee_a4239f4cde4f.slice - libcontainer container kubepods-besteffort-pod7293e6d6_a88e_43ea_b7ee_a4239f4cde4f.slice. Apr 17 23:31:10.696063 systemd[1]: Created slice kubepods-besteffort-pod3364d307_610e_4ec3_b6b8_9438cae6845a.slice - libcontainer container kubepods-besteffort-pod3364d307_610e_4ec3_b6b8_9438cae6845a.slice. Apr 17 23:31:10.705983 containerd[1455]: time="2026-04-17T23:31:10.705740825Z" level=info msg="CreateContainer within sandbox \"896e66d880a99a9863ed7196b30777725cd8916a790fa042efb56c721a20cb43\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bd4100f57954310657c71ff48798738b2564b17874de9a5258fbfb70a342b049\"" Apr 17 23:31:10.706078 systemd[1]: Created slice kubepods-burstable-pod3c3c65d0_eb16_4067_b7da_ac4edc1b0e1b.slice - libcontainer container kubepods-burstable-pod3c3c65d0_eb16_4067_b7da_ac4edc1b0e1b.slice. Apr 17 23:31:10.706856 containerd[1455]: time="2026-04-17T23:31:10.706603168Z" level=info msg="StartContainer for \"bd4100f57954310657c71ff48798738b2564b17874de9a5258fbfb70a342b049\"" Apr 17 23:31:10.748682 systemd[1]: Started cri-containerd-bd4100f57954310657c71ff48798738b2564b17874de9a5258fbfb70a342b049.scope - libcontainer container bd4100f57954310657c71ff48798738b2564b17874de9a5258fbfb70a342b049. 
Apr 17 23:31:10.766649 kubelet[2507]: I0417 23:31:10.766528 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b-config-volume\") pod \"coredns-7d764666f9-rnp9h\" (UID: \"3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b\") " pod="kube-system/coredns-7d764666f9-rnp9h" Apr 17 23:31:10.768605 kubelet[2507]: I0417 23:31:10.766837 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmlnf\" (UniqueName: \"kubernetes.io/projected/3364d307-610e-4ec3-b6b8-9438cae6845a-kube-api-access-zmlnf\") pod \"calico-apiserver-5f84d7f489-kwq9q\" (UID: \"3364d307-610e-4ec3-b6b8-9438cae6845a\") " pod="calico-system/calico-apiserver-5f84d7f489-kwq9q" Apr 17 23:31:10.768605 kubelet[2507]: I0417 23:31:10.766856 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2pj2\" (UniqueName: \"kubernetes.io/projected/3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b-kube-api-access-k2pj2\") pod \"coredns-7d764666f9-rnp9h\" (UID: \"3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b\") " pod="kube-system/coredns-7d764666f9-rnp9h" Apr 17 23:31:10.768605 kubelet[2507]: I0417 23:31:10.766916 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3364d307-610e-4ec3-b6b8-9438cae6845a-calico-apiserver-certs\") pod \"calico-apiserver-5f84d7f489-kwq9q\" (UID: \"3364d307-610e-4ec3-b6b8-9438cae6845a\") " pod="calico-system/calico-apiserver-5f84d7f489-kwq9q" Apr 17 23:31:10.815146 containerd[1455]: time="2026-04-17T23:31:10.814884942Z" level=info msg="StartContainer for \"bd4100f57954310657c71ff48798738b2564b17874de9a5258fbfb70a342b049\" returns successfully" Apr 17 23:31:10.957856 kubelet[2507]: E0417 23:31:10.956937 2507 dns.go:154] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:10.959966 containerd[1455]: time="2026-04-17T23:31:10.959860875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c5kzw,Uid:82e80704-4770-4915-a706-1207ff1fb60b,Namespace:kube-system,Attempt:0,}" Apr 17 23:31:10.963932 containerd[1455]: time="2026-04-17T23:31:10.963023494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84d7f489-rm5t6,Uid:cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb,Namespace:calico-system,Attempt:0,}" Apr 17 23:31:10.984054 containerd[1455]: time="2026-04-17T23:31:10.983800308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f79f9c6d-x769v,Uid:6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4,Namespace:calico-system,Attempt:0,}" Apr 17 23:31:10.987248 containerd[1455]: time="2026-04-17T23:31:10.987113813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857576c76b-6b74d,Uid:3b7da14d-a2f0-4dfd-9ad4-fcbdebe97c4f,Namespace:calico-system,Attempt:0,}" Apr 17 23:31:11.003488 containerd[1455]: time="2026-04-17T23:31:11.003101910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6m64d,Uid:7293e6d6-a88e-43ea-b7ee-a4239f4cde4f,Namespace:calico-system,Attempt:0,}" Apr 17 23:31:11.013054 containerd[1455]: time="2026-04-17T23:31:11.012964998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84d7f489-kwq9q,Uid:3364d307-610e-4ec3-b6b8-9438cae6845a,Namespace:calico-system,Attempt:0,}" Apr 17 23:31:11.013904 kubelet[2507]: E0417 23:31:11.012777 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:11.015838 containerd[1455]: time="2026-04-17T23:31:11.015673786Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7d764666f9-rnp9h,Uid:3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b,Namespace:kube-system,Attempt:0,}" Apr 17 23:31:11.431953 systemd[1]: Created slice kubepods-besteffort-pod40dcf753_14d8_4454_adb8_95d51e1d49d9.slice - libcontainer container kubepods-besteffort-pod40dcf753_14d8_4454_adb8_95d51e1d49d9.slice. Apr 17 23:31:11.466471 containerd[1455]: time="2026-04-17T23:31:11.465522992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktqpr,Uid:40dcf753-14d8-4454-adb8-95d51e1d49d9,Namespace:calico-system,Attempt:0,}" Apr 17 23:31:11.839726 systemd-networkd[1380]: calicedac9e39d5: Link UP Apr 17 23:31:11.840856 systemd-networkd[1380]: calicedac9e39d5: Gained carrier Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.392 [INFO][3614] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.393 [INFO][3614] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" iface="eth0" netns="/var/run/netns/cni-307c2f27-4aa3-0f67-da64-f4d1366dcbed" Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.394 [INFO][3614] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" iface="eth0" netns="/var/run/netns/cni-307c2f27-4aa3-0f67-da64-f4d1366dcbed" Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.394 [INFO][3614] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" iface="eth0" netns="/var/run/netns/cni-307c2f27-4aa3-0f67-da64-f4d1366dcbed" Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.394 [INFO][3614] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.395 [INFO][3614] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.530 [INFO][3698] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" HandleID="k8s-pod-network.2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" Workload="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.530 [INFO][3698] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.811 [INFO][3698] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.829 [WARNING][3698] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" HandleID="k8s-pod-network.2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" Workload="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.832 [INFO][3698] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" HandleID="k8s-pod-network.2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" Workload="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.837 [INFO][3698] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:11.846098 containerd[1455]: 2026-04-17 23:31:11.843 [INFO][3614] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98" Apr 17 23:31:11.848634 systemd[1]: run-netns-cni\x2d307c2f27\x2d4aa3\x2d0f67\x2dda64\x2df4d1366dcbed.mount: Deactivated successfully. Apr 17 23:31:11.852233 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98-shm.mount: Deactivated successfully. 
Apr 17 23:31:11.857911 containerd[1455]: time="2026-04-17T23:31:11.856531803Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84d7f489-kwq9q,Uid:3364d307-610e-4ec3-b6b8-9438cae6845a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:11.861574 kubelet[2507]: I0417 23:31:11.859947 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-mbf6x" podStartSLOduration=1.703277891 podStartE2EDuration="23.859932438s" podCreationTimestamp="2026-04-17 23:30:48 +0000 UTC" firstStartedPulling="2026-04-17 23:30:48.453127636 +0000 UTC m=+17.168332482" lastFinishedPulling="2026-04-17 23:31:10.609782179 +0000 UTC m=+39.324987029" observedRunningTime="2026-04-17 23:31:11.688519784 +0000 UTC m=+40.403724642" watchObservedRunningTime="2026-04-17 23:31:11.859932438 +0000 UTC m=+40.575137284" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.268 [ERROR][3508] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.342 [INFO][3508] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0 calico-kube-controllers-857576c76b- calico-system 3b7da14d-a2f0-4dfd-9ad4-fcbdebe97c4f 947 0 2026-04-17 23:30:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:857576c76b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-857576c76b-6b74d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calicedac9e39d5 [] [] }} ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Namespace="calico-system" Pod="calico-kube-controllers-857576c76b-6b74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.343 [INFO][3508] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Namespace="calico-system" Pod="calico-kube-controllers-857576c76b-6b74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.470 [INFO][3689] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" HandleID="k8s-pod-network.90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Workload="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.508 [INFO][3689] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" HandleID="k8s-pod-network.90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Workload="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-857576c76b-6b74d", "timestamp":"2026-04-17 23:31:11.470553687 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001982c0)} Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.508 [INFO][3689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.508 [INFO][3689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.508 [INFO][3689] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.533 [INFO][3689] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" host="localhost" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.547 [INFO][3689] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.623 [INFO][3689] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.647 [INFO][3689] ipam/ipam.go 575: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="localhost" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.670 [INFO][3689] ipam/ipam_block_reader_writer.go 158: Found free block: 192.168.88.128/26 Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.670 [INFO][3689] ipam/ipam.go 588: Found unclaimed block in 22.810053ms host="localhost" subnet=192.168.88.128/26 Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.670 [INFO][3689] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.702 [INFO][3689] ipam/ipam_block_reader_writer.go 205: Successfully created pending affinity for block host="localhost" subnet=192.168.88.128/26 Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.702 [INFO][3689] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.707 [INFO][3689] ipam/ipam.go 165: The referenced block doesn't exist, trying to create it cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.716 [INFO][3689] ipam/ipam.go 172: Wrote affinity as pending cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.719 [INFO][3689] ipam/ipam.go 181: Attempting to claim the block cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.719 [INFO][3689] ipam/ipam_block_reader_writer.go 226: Attempting to create a new block affinityType="host" host="localhost" subnet=192.168.88.128/26 Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.735 [INFO][3689] ipam/ipam_block_reader_writer.go 267: Successfully created block Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.735 [INFO][3689] ipam/ipam_block_reader_writer.go 283: Confirming affinity 
host="localhost" subnet=192.168.88.128/26 Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.751 [INFO][3689] ipam/ipam_block_reader_writer.go 298: Successfully confirmed affinity host="localhost" subnet=192.168.88.128/26 Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.751 [INFO][3689] ipam/ipam.go 623: Block '192.168.88.128/26' has 64 free ips which is more than 1 ips required. host="localhost" subnet=192.168.88.128/26 Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.752 [INFO][3689] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" host="localhost" Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.760 [INFO][3689] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948 Apr 17 23:31:11.872355 containerd[1455]: 2026-04-17 23:31:11.776 [INFO][3689] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" host="localhost" Apr 17 23:31:11.874245 containerd[1455]: 2026-04-17 23:31:11.810 [INFO][3689] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.128/26] block=192.168.88.128/26 handle="k8s-pod-network.90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" host="localhost" Apr 17 23:31:11.874245 containerd[1455]: 2026-04-17 23:31:11.811 [INFO][3689] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.128/26] handle="k8s-pod-network.90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" host="localhost" Apr 17 23:31:11.874245 containerd[1455]: 2026-04-17 23:31:11.811 [INFO][3689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:31:11.874245 containerd[1455]: 2026-04-17 23:31:11.811 [INFO][3689] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.128/26] IPv6=[] ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" HandleID="k8s-pod-network.90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Workload="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0" Apr 17 23:31:11.874245 containerd[1455]: 2026-04-17 23:31:11.816 [INFO][3508] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Namespace="calico-system" Pod="calico-kube-controllers-857576c76b-6b74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0", GenerateName:"calico-kube-controllers-857576c76b-", Namespace:"calico-system", SelfLink:"", UID:"3b7da14d-a2f0-4dfd-9ad4-fcbdebe97c4f", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"857576c76b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-857576c76b-6b74d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.128/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicedac9e39d5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:11.874245 containerd[1455]: 2026-04-17 23:31:11.818 [INFO][3508] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.128/32] ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Namespace="calico-system" Pod="calico-kube-controllers-857576c76b-6b74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0" Apr 17 23:31:11.874245 containerd[1455]: 2026-04-17 23:31:11.818 [INFO][3508] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicedac9e39d5 ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Namespace="calico-system" Pod="calico-kube-controllers-857576c76b-6b74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0" Apr 17 23:31:11.874245 containerd[1455]: 2026-04-17 23:31:11.841 [INFO][3508] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Namespace="calico-system" Pod="calico-kube-controllers-857576c76b-6b74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0" Apr 17 23:31:11.874245 containerd[1455]: 2026-04-17 23:31:11.842 [INFO][3508] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Namespace="calico-system" Pod="calico-kube-controllers-857576c76b-6b74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0", GenerateName:"calico-kube-controllers-857576c76b-", Namespace:"calico-system", SelfLink:"", UID:"3b7da14d-a2f0-4dfd-9ad4-fcbdebe97c4f", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"857576c76b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948", Pod:"calico-kube-controllers-857576c76b-6b74d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.128/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calicedac9e39d5", MAC:"fa:61:d3:ff:ee:d6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:11.874245 containerd[1455]: 2026-04-17 23:31:11.862 [INFO][3508] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948" Namespace="calico-system" Pod="calico-kube-controllers-857576c76b-6b74d" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--857576c76b--6b74d-eth0" Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.432 [INFO][3641] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.432 [INFO][3641] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" iface="eth0" netns="/var/run/netns/cni-8047de6b-9d68-1d67-dbf5-1753bdc10a43" Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.434 [INFO][3641] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" iface="eth0" netns="/var/run/netns/cni-8047de6b-9d68-1d67-dbf5-1753bdc10a43" Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.442 [INFO][3641] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" iface="eth0" netns="/var/run/netns/cni-8047de6b-9d68-1d67-dbf5-1753bdc10a43" Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.442 [INFO][3641] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.459 [INFO][3641] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.569 [INFO][3713] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" HandleID="k8s-pod-network.ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" Workload="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.571 [INFO][3713] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.838 [INFO][3713] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.858 [WARNING][3713] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" HandleID="k8s-pod-network.ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" Workload="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.860 [INFO][3713] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" HandleID="k8s-pod-network.ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" Workload="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.865 [INFO][3713] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:11.880231 containerd[1455]: 2026-04-17 23:31:11.873 [INFO][3641] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971" Apr 17 23:31:11.885649 systemd[1]: run-netns-cni\x2d8047de6b\x2d9d68\x2d1d67\x2ddbf5\x2d1753bdc10a43.mount: Deactivated successfully. Apr 17 23:31:11.885813 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971-shm.mount: Deactivated successfully. 
Apr 17 23:31:11.888356 containerd[1455]: time="2026-04-17T23:31:11.887961505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rnp9h,Uid:3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:11.888420 kubelet[2507]: E0417 23:31:11.887144 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:11.888420 kubelet[2507]: E0417 23:31:11.887753 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5f84d7f489-kwq9q" Apr 17 23:31:11.888420 kubelet[2507]: E0417 23:31:11.887831 2507 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5f84d7f489-kwq9q" Apr 17 
23:31:11.888492 kubelet[2507]: E0417 23:31:11.888019 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f84d7f489-kwq9q_calico-system(3364d307-610e-4ec3-b6b8-9438cae6845a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f84d7f489-kwq9q_calico-system(3364d307-610e-4ec3-b6b8-9438cae6845a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ba86e583f41b164e73970a7887b9d528b140d6688b1acadc3c71de19a854c98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5f84d7f489-kwq9q" podUID="3364d307-610e-4ec3-b6b8-9438cae6845a" Apr 17 23:31:11.891154 kubelet[2507]: E0417 23:31:11.890940 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:11.891154 kubelet[2507]: E0417 23:31:11.890990 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-rnp9h" Apr 17 23:31:11.891154 kubelet[2507]: E0417 23:31:11.891009 2507 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-rnp9h" Apr 17 23:31:11.891532 kubelet[2507]: E0417 23:31:11.891058 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-rnp9h_kube-system(3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-rnp9h_kube-system(3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab3a6a4c0bbf304fd20310b66fd861c851fdce70b0697f5246c4996df5f59971\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-rnp9h" podUID="3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b" Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.429 [INFO][3615] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.438 [INFO][3615] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" iface="eth0" netns="/var/run/netns/cni-bb65a16a-aa52-e261-b174-e17ebf5aa594" Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.459 [INFO][3615] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" iface="eth0" netns="/var/run/netns/cni-bb65a16a-aa52-e261-b174-e17ebf5aa594" Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.467 [INFO][3615] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" iface="eth0" netns="/var/run/netns/cni-bb65a16a-aa52-e261-b174-e17ebf5aa594" Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.467 [INFO][3615] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.467 [INFO][3615] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.619 [INFO][3717] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" HandleID="k8s-pod-network.f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" Workload="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.619 [INFO][3717] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.866 [INFO][3717] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.894 [WARNING][3717] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" HandleID="k8s-pod-network.f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" Workload="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.894 [INFO][3717] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" HandleID="k8s-pod-network.f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" Workload="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.898 [INFO][3717] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:11.912245 containerd[1455]: 2026-04-17 23:31:11.905 [INFO][3615] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf" Apr 17 23:31:11.919027 containerd[1455]: time="2026-04-17T23:31:11.918991965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c5kzw,Uid:82e80704-4770-4915-a706-1207ff1fb60b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:11.919901 kubelet[2507]: E0417 23:31:11.919843 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:11.920482 kubelet[2507]: E0417 23:31:11.920057 2507 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-c5kzw" Apr 17 23:31:11.920482 kubelet[2507]: E0417 23:31:11.920076 2507 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-c5kzw" Apr 17 23:31:11.920482 kubelet[2507]: E0417 23:31:11.920128 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-c5kzw_kube-system(82e80704-4770-4915-a706-1207ff1fb60b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-c5kzw_kube-system(82e80704-4770-4915-a706-1207ff1fb60b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-c5kzw" podUID="82e80704-4770-4915-a706-1207ff1fb60b" Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.382 [INFO][3609] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.383 [INFO][3609] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" iface="eth0" netns="/var/run/netns/cni-c2cfca93-bb57-b744-1516-46a75116402f" Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.383 [INFO][3609] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" iface="eth0" netns="/var/run/netns/cni-c2cfca93-bb57-b744-1516-46a75116402f" Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.385 [INFO][3609] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" iface="eth0" netns="/var/run/netns/cni-c2cfca93-bb57-b744-1516-46a75116402f" Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.385 [INFO][3609] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.385 [INFO][3609] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.601 [INFO][3695] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" HandleID="k8s-pod-network.f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" Workload="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.636 [INFO][3695] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.898 [INFO][3695] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.921 [WARNING][3695] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" HandleID="k8s-pod-network.f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" Workload="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.921 [INFO][3695] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" HandleID="k8s-pod-network.f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" Workload="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.925 [INFO][3695] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:11.932038 containerd[1455]: 2026-04-17 23:31:11.927 [INFO][3609] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769" Apr 17 23:31:11.947737 containerd[1455]: time="2026-04-17T23:31:11.947562429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6m64d,Uid:7293e6d6-a88e-43ea-b7ee-a4239f4cde4f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:11.951823 kubelet[2507]: E0417 23:31:11.951237 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:11.951823 kubelet[2507]: E0417 23:31:11.951456 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-6m64d" Apr 17 23:31:11.951823 kubelet[2507]: E0417 23:31:11.951473 2507 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-6m64d" Apr 17 23:31:11.960533 kubelet[2507]: E0417 23:31:11.956686 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-6m64d_calico-system(7293e6d6-a88e-43ea-b7ee-a4239f4cde4f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-6m64d_calico-system(7293e6d6-a88e-43ea-b7ee-a4239f4cde4f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-6m64d" podUID="7293e6d6-a88e-43ea-b7ee-a4239f4cde4f" Apr 17 23:31:11.982040 containerd[1455]: time="2026-04-17T23:31:11.981274095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:31:11.982040 containerd[1455]: time="2026-04-17T23:31:11.981477809Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:31:11.982040 containerd[1455]: time="2026-04-17T23:31:11.981487924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:11.982040 containerd[1455]: time="2026-04-17T23:31:11.981584270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.416 [INFO][3664] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.416 [INFO][3664] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" iface="eth0" netns="/var/run/netns/cni-54c8e9e1-7c8a-0a4a-e7b2-b0e4d92b405c" Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.417 [INFO][3664] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" iface="eth0" netns="/var/run/netns/cni-54c8e9e1-7c8a-0a4a-e7b2-b0e4d92b405c" Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.420 [INFO][3664] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" iface="eth0" netns="/var/run/netns/cni-54c8e9e1-7c8a-0a4a-e7b2-b0e4d92b405c" Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.420 [INFO][3664] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.420 [INFO][3664] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.654 [INFO][3709] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" HandleID="k8s-pod-network.80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" Workload="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.654 [INFO][3709] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.925 [INFO][3709] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.962 [WARNING][3709] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" HandleID="k8s-pod-network.80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" Workload="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.962 [INFO][3709] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" HandleID="k8s-pod-network.80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" Workload="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.972 [INFO][3709] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:11.982728 containerd[1455]: 2026-04-17 23:31:11.978 [INFO][3664] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044" Apr 17 23:31:11.995189 containerd[1455]: time="2026-04-17T23:31:11.994757590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84d7f489-rm5t6,Uid:cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:12.000892 kubelet[2507]: E0417 23:31:11.997782 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:12.000892 kubelet[2507]: 
E0417 23:31:11.998033 2507 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5f84d7f489-rm5t6" Apr 17 23:31:12.000892 kubelet[2507]: E0417 23:31:11.998064 2507 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-5f84d7f489-rm5t6" Apr 17 23:31:12.001190 kubelet[2507]: E0417 23:31:11.998565 2507 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5f84d7f489-rm5t6_calico-system(cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5f84d7f489-rm5t6_calico-system(cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-5f84d7f489-rm5t6" podUID="cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb" Apr 17 23:31:12.010560 systemd[1]: Started cri-containerd-90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948.scope - libcontainer container 
90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948. Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.469 [INFO][3617] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.471 [INFO][3617] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" iface="eth0" netns="/var/run/netns/cni-5430bf58-fb3c-53ff-8b59-748dc8f166ad" Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.471 [INFO][3617] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" iface="eth0" netns="/var/run/netns/cni-5430bf58-fb3c-53ff-8b59-748dc8f166ad" Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.475 [INFO][3617] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" iface="eth0" netns="/var/run/netns/cni-5430bf58-fb3c-53ff-8b59-748dc8f166ad" Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.475 [INFO][3617] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.475 [INFO][3617] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.676 [INFO][3721] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" HandleID="k8s-pod-network.9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" Workload="localhost-k8s-whisker--7f79f9c6d--x769v-eth0" Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.676 [INFO][3721] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.973 [INFO][3721] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.995 [WARNING][3721] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" HandleID="k8s-pod-network.9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" Workload="localhost-k8s-whisker--7f79f9c6d--x769v-eth0" Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:11.995 [INFO][3721] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" HandleID="k8s-pod-network.9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" Workload="localhost-k8s-whisker--7f79f9c6d--x769v-eth0" Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:12.004 [INFO][3721] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:12.015871 containerd[1455]: 2026-04-17 23:31:12.008 [INFO][3617] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd" Apr 17 23:31:12.024212 containerd[1455]: time="2026-04-17T23:31:12.024061486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f79f9c6d-x769v,Uid:6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:12.024721 kubelet[2507]: E0417 23:31:12.024544 2507 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:31:12.024721 kubelet[2507]: E0417 23:31:12.024631 2507 
kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7f79f9c6d-x769v" Apr 17 23:31:12.039663 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:31:12.082520 systemd-networkd[1380]: cali968478fa0c6: Link UP Apr 17 23:31:12.083608 systemd-networkd[1380]: cali968478fa0c6: Gained carrier Apr 17 23:31:12.084123 containerd[1455]: time="2026-04-17T23:31:12.082923651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-857576c76b-6b74d,Uid:3b7da14d-a2f0-4dfd-9ad4-fcbdebe97c4f,Namespace:calico-system,Attempt:0,} returns sandbox id \"90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948\"" Apr 17 23:31:12.088649 containerd[1455]: time="2026-04-17T23:31:12.088630101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:11.706 [ERROR][3746] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:11.735 [INFO][3746] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--ktqpr-eth0 csi-node-driver- calico-system 40dcf753-14d8-4454-adb8-95d51e1d49d9 749 0 2026-04-17 23:30:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-ktqpr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali968478fa0c6 [] [] }} ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Namespace="calico-system" Pod="csi-node-driver-ktqpr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqpr-" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:11.735 [INFO][3746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Namespace="calico-system" Pod="csi-node-driver-ktqpr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqpr-eth0" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:11.802 [INFO][3789] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" HandleID="k8s-pod-network.ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Workload="localhost-k8s-csi--node--driver--ktqpr-eth0" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:11.818 [INFO][3789] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" HandleID="k8s-pod-network.ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Workload="localhost-k8s-csi--node--driver--ktqpr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb90), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-ktqpr", "timestamp":"2026-04-17 23:31:11.802529168 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0xc00058fa20)} Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:11.819 [INFO][3789] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.005 [INFO][3789] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.005 [INFO][3789] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.010 [INFO][3789] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" host="localhost" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.016 [INFO][3789] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.032 [INFO][3789] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.037 [INFO][3789] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.042 [INFO][3789] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.042 [INFO][3789] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" host="localhost" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.046 [INFO][3789] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.059 [INFO][3789] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" host="localhost" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.069 [INFO][3789] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" host="localhost" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.069 [INFO][3789] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" host="localhost" Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.069 [INFO][3789] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:12.118416 containerd[1455]: 2026-04-17 23:31:12.069 [INFO][3789] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" HandleID="k8s-pod-network.ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Workload="localhost-k8s-csi--node--driver--ktqpr-eth0" Apr 17 23:31:12.118975 containerd[1455]: 2026-04-17 23:31:12.074 [INFO][3746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Namespace="calico-system" Pod="csi-node-driver-ktqpr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ktqpr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40dcf753-14d8-4454-adb8-95d51e1d49d9", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-ktqpr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali968478fa0c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:12.118975 containerd[1455]: 2026-04-17 23:31:12.075 [INFO][3746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Namespace="calico-system" Pod="csi-node-driver-ktqpr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqpr-eth0" Apr 17 23:31:12.118975 containerd[1455]: 2026-04-17 23:31:12.075 [INFO][3746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali968478fa0c6 ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Namespace="calico-system" Pod="csi-node-driver-ktqpr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqpr-eth0" Apr 17 23:31:12.118975 containerd[1455]: 2026-04-17 23:31:12.084 [INFO][3746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Namespace="calico-system" Pod="csi-node-driver-ktqpr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqpr-eth0" Apr 17 23:31:12.118975 containerd[1455]: 2026-04-17 23:31:12.085 [INFO][3746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Namespace="calico-system" Pod="csi-node-driver-ktqpr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqpr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--ktqpr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"40dcf753-14d8-4454-adb8-95d51e1d49d9", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a", Pod:"csi-node-driver-ktqpr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali968478fa0c6", MAC:"42:43:80:4d:0c:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 
23:31:12.118975 containerd[1455]: 2026-04-17 23:31:12.112 [INFO][3746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a" Namespace="calico-system" Pod="csi-node-driver-ktqpr" WorkloadEndpoint="localhost-k8s-csi--node--driver--ktqpr-eth0" Apr 17 23:31:12.168005 containerd[1455]: time="2026-04-17T23:31:12.167056265Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:31:12.169377 containerd[1455]: time="2026-04-17T23:31:12.167772558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:31:12.169377 containerd[1455]: time="2026-04-17T23:31:12.168233576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:12.170705 containerd[1455]: time="2026-04-17T23:31:12.170267237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:12.194719 kubelet[2507]: I0417 23:31:12.194582 2507 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:31:12.195263 kubelet[2507]: E0417 23:31:12.195126 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:12.202616 systemd[1]: Started cri-containerd-ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a.scope - libcontainer container ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a. 
Apr 17 23:31:12.222181 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:31:12.245031 containerd[1455]: time="2026-04-17T23:31:12.245002246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-ktqpr,Uid:40dcf753-14d8-4454-adb8-95d51e1d49d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a\"" Apr 17 23:31:12.639089 kubelet[2507]: E0417 23:31:12.638475 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:12.644543 kubelet[2507]: E0417 23:31:12.644424 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:12.645482 containerd[1455]: time="2026-04-17T23:31:12.645442099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rnp9h,Uid:3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b,Namespace:kube-system,Attempt:0,}" Apr 17 23:31:12.649874 containerd[1455]: time="2026-04-17T23:31:12.649850914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84d7f489-rm5t6,Uid:cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb,Namespace:calico-system,Attempt:0,}" Apr 17 23:31:12.662046 containerd[1455]: time="2026-04-17T23:31:12.661284001Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84d7f489-kwq9q,Uid:3364d307-610e-4ec3-b6b8-9438cae6845a,Namespace:calico-system,Attempt:0,}" Apr 17 23:31:12.665093 kubelet[2507]: E0417 23:31:12.664727 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:12.666086 containerd[1455]: time="2026-04-17T23:31:12.665788243Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c5kzw,Uid:82e80704-4770-4915-a706-1207ff1fb60b,Namespace:kube-system,Attempt:0,}" Apr 17 23:31:12.668540 containerd[1455]: time="2026-04-17T23:31:12.668495721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6m64d,Uid:7293e6d6-a88e-43ea-b7ee-a4239f4cde4f,Namespace:calico-system,Attempt:0,}" Apr 17 23:31:12.701084 kubelet[2507]: I0417 23:31:12.699711 2507 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-whisker-backend-key-pair\") pod \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\" (UID: \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\") " Apr 17 23:31:12.701084 kubelet[2507]: I0417 23:31:12.699760 2507 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-whisker-ca-bundle\") pod \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\" (UID: \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\") " Apr 17 23:31:12.701084 kubelet[2507]: I0417 23:31:12.699780 2507 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-nginx-config\" (UniqueName: \"kubernetes.io/configmap/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-nginx-config\") pod \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\" (UID: \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\") " Apr 17 23:31:12.701084 kubelet[2507]: I0417 23:31:12.699797 2507 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-kube-api-access-t6vbv\" (UniqueName: \"kubernetes.io/projected/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-kube-api-access-t6vbv\") pod 
\"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\" (UID: \"6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4\") " Apr 17 23:31:12.702946 kubelet[2507]: I0417 23:31:12.702910 2507 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-nginx-config" pod "6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4" (UID: "6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:31:12.705360 kubelet[2507]: I0417 23:31:12.703801 2507 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-whisker-ca-bundle" pod "6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4" (UID: "6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:31:12.713383 kubelet[2507]: I0417 23:31:12.709570 2507 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-kube-api-access-t6vbv" pod "6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4" (UID: "6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4"). InnerVolumeSpecName "kube-api-access-t6vbv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:31:12.714473 kubelet[2507]: I0417 23:31:12.714276 2507 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-whisker-backend-key-pair" pod "6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4" (UID: "6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:31:12.763630 systemd[1]: run-netns-cni\x2dc2cfca93\x2dbb57\x2db744\x2d1516\x2d46a75116402f.mount: Deactivated successfully. 
Apr 17 23:31:12.763711 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f59131f2c935a3ee7be7adea8ad796ec5ae354022d3b3f578719739a729c3769-shm.mount: Deactivated successfully. Apr 17 23:31:12.763762 systemd[1]: run-netns-cni\x2d5430bf58\x2dfb3c\x2d53ff\x2d8b59\x2d748dc8f166ad.mount: Deactivated successfully. Apr 17 23:31:12.763859 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9f61a56cecf78ec69105abef29806a15eb22d7829ac03effbd440aacc3b8b8cd-shm.mount: Deactivated successfully. Apr 17 23:31:12.763917 systemd[1]: run-netns-cni\x2d54c8e9e1\x2d7c8a\x2d0a4a\x2de7b2\x2db0e4d92b405c.mount: Deactivated successfully. Apr 17 23:31:12.763959 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-80681febe8ba934047c98097f293d6358f550cf5fd1bd4d0ca2dbb468ee37044-shm.mount: Deactivated successfully. Apr 17 23:31:12.764002 systemd[1]: run-netns-cni\x2dbb65a16a\x2daa52\x2de261\x2db174\x2de17ebf5aa594.mount: Deactivated successfully. Apr 17 23:31:12.764099 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f20ff0b5c45ece82aefcca35a7d06baaabfe3d83d2fcfd1ebb8a0566387244cf-shm.mount: Deactivated successfully. Apr 17 23:31:12.764178 systemd[1]: var-lib-kubelet-pods-6a5bfb7f\x2d45ca\x2d40c7\x2db4cf\x2db11fe52150a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt6vbv.mount: Deactivated successfully. Apr 17 23:31:12.764228 systemd[1]: var-lib-kubelet-pods-6a5bfb7f\x2d45ca\x2d40c7\x2db4cf\x2db11fe52150a4-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 17 23:31:12.800773 kubelet[2507]: I0417 23:31:12.800700 2507 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 17 23:31:12.800773 kubelet[2507]: I0417 23:31:12.800767 2507 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 17 23:31:12.800773 kubelet[2507]: I0417 23:31:12.800775 2507 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 17 23:31:12.800773 kubelet[2507]: I0417 23:31:12.800780 2507 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t6vbv\" (UniqueName: \"kubernetes.io/projected/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4-kube-api-access-t6vbv\") on node \"localhost\" DevicePath \"\"" Apr 17 23:31:12.941744 systemd-networkd[1380]: calic954f3a874b: Link UP Apr 17 23:31:12.942017 systemd-networkd[1380]: calic954f3a874b: Gained carrier Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.736 [ERROR][3939] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.756 [INFO][3939] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0 calico-apiserver-5f84d7f489- calico-system cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb 974 0 2026-04-17 23:30:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f84d7f489 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5f84d7f489-rm5t6 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic954f3a874b [] [] }} ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-rm5t6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.757 [INFO][3939] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-rm5t6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.825 [INFO][4003] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" HandleID="k8s-pod-network.8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Workload="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.835 [INFO][4003] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" HandleID="k8s-pod-network.8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Workload="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ee160), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5f84d7f489-rm5t6", "timestamp":"2026-04-17 23:31:12.825084832 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0007c6000)} Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.835 [INFO][4003] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.835 [INFO][4003] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.836 [INFO][4003] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.839 [INFO][4003] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" host="localhost" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.851 [INFO][4003] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.859 [INFO][4003] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.861 [INFO][4003] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.863 [INFO][4003] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.864 [INFO][4003] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" host="localhost" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.865 [INFO][4003] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8 Apr 17 23:31:12.978277 containerd[1455]: 
2026-04-17 23:31:12.872 [INFO][4003] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" host="localhost" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.879 [INFO][4003] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" host="localhost" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.879 [INFO][4003] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" host="localhost" Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.880 [INFO][4003] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:12.978277 containerd[1455]: 2026-04-17 23:31:12.880 [INFO][4003] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" HandleID="k8s-pod-network.8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Workload="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" Apr 17 23:31:12.980094 containerd[1455]: 2026-04-17 23:31:12.938 [INFO][3939] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-rm5t6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0", GenerateName:"calico-apiserver-5f84d7f489-", Namespace:"calico-system", SelfLink:"", UID:"cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb", ResourceVersion:"974", 
Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f84d7f489", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f84d7f489-rm5t6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic954f3a874b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:12.980094 containerd[1455]: 2026-04-17 23:31:12.939 [INFO][3939] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-rm5t6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" Apr 17 23:31:12.980094 containerd[1455]: 2026-04-17 23:31:12.939 [INFO][3939] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic954f3a874b ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-rm5t6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" Apr 17 23:31:12.980094 containerd[1455]: 2026-04-17 23:31:12.954 [INFO][3939] cni-plugin/dataplane_linux.go 508: Disabling 
IPv4 forwarding ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-rm5t6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" Apr 17 23:31:12.980094 containerd[1455]: 2026-04-17 23:31:12.963 [INFO][3939] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-rm5t6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0", GenerateName:"calico-apiserver-5f84d7f489-", Namespace:"calico-system", SelfLink:"", UID:"cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f84d7f489", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8", Pod:"calico-apiserver-5f84d7f489-rm5t6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-apiserver"}, InterfaceName:"calic954f3a874b", MAC:"ce:49:46:e9:61:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:12.980094 containerd[1455]: 2026-04-17 23:31:12.973 [INFO][3939] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-rm5t6" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--rm5t6-eth0" Apr 17 23:31:13.023138 systemd-networkd[1380]: calic771fb31d88: Link UP Apr 17 23:31:13.025436 systemd-networkd[1380]: calic771fb31d88: Gained carrier Apr 17 23:31:13.031260 containerd[1455]: time="2026-04-17T23:31:13.031049976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:31:13.031260 containerd[1455]: time="2026-04-17T23:31:13.031113958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:31:13.031260 containerd[1455]: time="2026-04-17T23:31:13.031127115Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:13.032032 containerd[1455]: time="2026-04-17T23:31:13.031858762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.782 [ERROR][3960] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.807 [INFO][3960] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--c5kzw-eth0 coredns-7d764666f9- kube-system 82e80704-4770-4915-a706-1207ff1fb60b 976 0 2026-04-17 23:30:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-c5kzw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic771fb31d88 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Namespace="kube-system" Pod="coredns-7d764666f9-c5kzw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5kzw-" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.807 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Namespace="kube-system" Pod="coredns-7d764666f9-c5kzw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.866 [INFO][4026] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" HandleID="k8s-pod-network.c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Workload="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" Apr 17 
23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.874 [INFO][4026] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" HandleID="k8s-pod-network.c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Workload="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003781c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-c5kzw", "timestamp":"2026-04-17 23:31:12.866250985 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000544580)} Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.874 [INFO][4026] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.880 [INFO][4026] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.880 [INFO][4026] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.954 [INFO][4026] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" host="localhost" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.965 [INFO][4026] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.973 [INFO][4026] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.976 [INFO][4026] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.981 [INFO][4026] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.981 [INFO][4026] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" host="localhost" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:12.986 [INFO][4026] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818 Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:13.003 [INFO][4026] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" host="localhost" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:13.014 [INFO][4026] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" host="localhost" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:13.014 [INFO][4026] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" host="localhost" Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:13.014 [INFO][4026] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:13.045996 containerd[1455]: 2026-04-17 23:31:13.014 [INFO][4026] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" HandleID="k8s-pod-network.c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Workload="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" Apr 17 23:31:13.049632 containerd[1455]: 2026-04-17 23:31:13.019 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Namespace="kube-system" Pod="coredns-7d764666f9-c5kzw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--c5kzw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"82e80704-4770-4915-a706-1207ff1fb60b", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-c5kzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic771fb31d88", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:13.049632 containerd[1455]: 2026-04-17 23:31:13.019 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Namespace="kube-system" Pod="coredns-7d764666f9-c5kzw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" Apr 17 23:31:13.049632 containerd[1455]: 2026-04-17 23:31:13.019 [INFO][3960] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic771fb31d88 ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Namespace="kube-system" Pod="coredns-7d764666f9-c5kzw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" Apr 17 
23:31:13.049632 containerd[1455]: 2026-04-17 23:31:13.025 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Namespace="kube-system" Pod="coredns-7d764666f9-c5kzw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" Apr 17 23:31:13.049632 containerd[1455]: 2026-04-17 23:31:13.026 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Namespace="kube-system" Pod="coredns-7d764666f9-c5kzw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--c5kzw-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"82e80704-4770-4915-a706-1207ff1fb60b", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818", Pod:"coredns-7d764666f9-c5kzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic771fb31d88", 
MAC:"b2:ea:cc:74:2e:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:13.049632 containerd[1455]: 2026-04-17 23:31:13.042 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818" Namespace="kube-system" Pod="coredns-7d764666f9-c5kzw" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--c5kzw-eth0" Apr 17 23:31:13.062550 systemd[1]: Started cri-containerd-8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8.scope - libcontainer container 8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8. Apr 17 23:31:13.108463 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:31:13.143010 containerd[1455]: time="2026-04-17T23:31:13.138590567Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:31:13.143010 containerd[1455]: time="2026-04-17T23:31:13.139020417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:31:13.143010 containerd[1455]: time="2026-04-17T23:31:13.139166339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:13.143010 containerd[1455]: time="2026-04-17T23:31:13.139959937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:13.148026 systemd-networkd[1380]: calibf9e3c682ac: Link UP Apr 17 23:31:13.151490 systemd-networkd[1380]: calibf9e3c682ac: Gained carrier Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:12.759 [ERROR][3922] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:12.778 [INFO][3922] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7d764666f9--rnp9h-eth0 coredns-7d764666f9- kube-system 3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b 975 0 2026-04-17 23:30:37 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7d764666f9-rnp9h eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibf9e3c682ac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Namespace="kube-system" Pod="coredns-7d764666f9-rnp9h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--rnp9h-" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:12.778 [INFO][3922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Namespace="kube-system" Pod="coredns-7d764666f9-rnp9h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:12.866 [INFO][4014] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" HandleID="k8s-pod-network.3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Workload="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:12.881 [INFO][4014] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" HandleID="k8s-pod-network.3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Workload="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7d764666f9-rnp9h", "timestamp":"2026-04-17 23:31:12.866772753 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005762c0)} Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:12.883 [INFO][4014] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.020 [INFO][4014] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.020 [INFO][4014] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.051 [INFO][4014] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" host="localhost" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.074 [INFO][4014] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.082 [INFO][4014] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.085 [INFO][4014] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.091 [INFO][4014] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.091 [INFO][4014] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" host="localhost" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.095 [INFO][4014] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5 Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.105 [INFO][4014] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" host="localhost" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.119 [INFO][4014] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" host="localhost" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.119 [INFO][4014] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" host="localhost" Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.119 [INFO][4014] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:13.181606 containerd[1455]: 2026-04-17 23:31:13.119 [INFO][4014] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" HandleID="k8s-pod-network.3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Workload="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" Apr 17 23:31:13.182614 containerd[1455]: 2026-04-17 23:31:13.128 [INFO][3922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Namespace="kube-system" Pod="coredns-7d764666f9-rnp9h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--rnp9h-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7d764666f9-rnp9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf9e3c682ac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:13.182614 containerd[1455]: 2026-04-17 23:31:13.129 [INFO][3922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Namespace="kube-system" Pod="coredns-7d764666f9-rnp9h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" Apr 17 23:31:13.182614 containerd[1455]: 2026-04-17 23:31:13.129 [INFO][3922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf9e3c682ac ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Namespace="kube-system" Pod="coredns-7d764666f9-rnp9h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" Apr 17 
23:31:13.182614 containerd[1455]: 2026-04-17 23:31:13.150 [INFO][3922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Namespace="kube-system" Pod="coredns-7d764666f9-rnp9h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" Apr 17 23:31:13.182614 containerd[1455]: 2026-04-17 23:31:13.152 [INFO][3922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Namespace="kube-system" Pod="coredns-7d764666f9-rnp9h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7d764666f9--rnp9h-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5", Pod:"coredns-7d764666f9-rnp9h", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibf9e3c682ac", 
MAC:"da:1c:eb:1f:dd:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:13.182614 containerd[1455]: 2026-04-17 23:31:13.169 [INFO][3922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5" Namespace="kube-system" Pod="coredns-7d764666f9-rnp9h" WorkloadEndpoint="localhost-k8s-coredns--7d764666f9--rnp9h-eth0" Apr 17 23:31:13.193562 systemd[1]: Started cri-containerd-c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818.scope - libcontainer container c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818. 
Apr 17 23:31:13.199542 containerd[1455]: time="2026-04-17T23:31:13.198964290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84d7f489-rm5t6,Uid:cd34f5a7-9bf4-4007-8c3f-3be1f839a4eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8\"" Apr 17 23:31:13.200998 systemd-networkd[1380]: calicedac9e39d5: Gained IPv6LL Apr 17 23:31:13.232849 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:31:13.282624 systemd-networkd[1380]: cali3785399dd19: Link UP Apr 17 23:31:13.286545 systemd-networkd[1380]: cali3785399dd19: Gained carrier Apr 17 23:31:13.298200 containerd[1455]: time="2026-04-17T23:31:13.297274231Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:31:13.298200 containerd[1455]: time="2026-04-17T23:31:13.297710015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:31:13.298200 containerd[1455]: time="2026-04-17T23:31:13.297725318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:13.300017 containerd[1455]: time="2026-04-17T23:31:13.298761735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-c5kzw,Uid:82e80704-4770-4915-a706-1207ff1fb60b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818\"" Apr 17 23:31:13.302184 kubelet[2507]: E0417 23:31:13.301736 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:13.315155 containerd[1455]: time="2026-04-17T23:31:13.304843763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:13.327727 containerd[1455]: time="2026-04-17T23:31:13.327694248Z" level=info msg="CreateContainer within sandbox \"c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:12.758 [ERROR][3949] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:12.778 [INFO][3949] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0 calico-apiserver-5f84d7f489- calico-system 3364d307-610e-4ec3-b6b8-9438cae6845a 973 0 2026-04-17 23:30:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5f84d7f489 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-5f84d7f489-kwq9q eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali3785399dd19 [] [] }} ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-kwq9q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:12.778 [INFO][3949] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-kwq9q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:12.894 [INFO][4011] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" HandleID="k8s-pod-network.51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" Workload="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:12.953 [INFO][4011] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" HandleID="k8s-pod-network.51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" Workload="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004ea120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-5f84d7f489-kwq9q", "timestamp":"2026-04-17 23:31:12.894195651 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0005fa580)} Apr 17 23:31:13.330646 
containerd[1455]: 2026-04-17 23:31:12.953 [INFO][4011] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.120 [INFO][4011] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.120 [INFO][4011] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.159 [INFO][4011] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" host="localhost" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.176 [INFO][4011] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.198 [INFO][4011] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.202 [INFO][4011] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.211 [INFO][4011] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.211 [INFO][4011] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" host="localhost" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.213 [INFO][4011] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095 Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.220 [INFO][4011] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" host="localhost" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.244 [INFO][4011] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" host="localhost" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.244 [INFO][4011] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" host="localhost" Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.244 [INFO][4011] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:13.330646 containerd[1455]: 2026-04-17 23:31:13.244 [INFO][4011] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" HandleID="k8s-pod-network.51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" Workload="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" Apr 17 23:31:13.331179 containerd[1455]: 2026-04-17 23:31:13.252 [INFO][3949] cni-plugin/k8s.go 418: Populated endpoint ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-kwq9q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0", GenerateName:"calico-apiserver-5f84d7f489-", Namespace:"calico-system", SelfLink:"", UID:"3364d307-610e-4ec3-b6b8-9438cae6845a", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 47, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f84d7f489", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5f84d7f489-kwq9q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3785399dd19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:13.331179 containerd[1455]: 2026-04-17 23:31:13.252 [INFO][3949] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-kwq9q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" Apr 17 23:31:13.331179 containerd[1455]: 2026-04-17 23:31:13.252 [INFO][3949] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3785399dd19 ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-kwq9q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" Apr 17 23:31:13.331179 containerd[1455]: 2026-04-17 23:31:13.296 [INFO][3949] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" 
Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-kwq9q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" Apr 17 23:31:13.331179 containerd[1455]: 2026-04-17 23:31:13.300 [INFO][3949] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-kwq9q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0", GenerateName:"calico-apiserver-5f84d7f489-", Namespace:"calico-system", SelfLink:"", UID:"3364d307-610e-4ec3-b6b8-9438cae6845a", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5f84d7f489", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095", Pod:"calico-apiserver-5f84d7f489-kwq9q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali3785399dd19", MAC:"2a:2a:03:c7:0d:eb", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:13.331179 containerd[1455]: 2026-04-17 23:31:13.326 [INFO][3949] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095" Namespace="calico-system" Pod="calico-apiserver-5f84d7f489-kwq9q" WorkloadEndpoint="localhost-k8s-calico--apiserver--5f84d7f489--kwq9q-eth0" Apr 17 23:31:13.369384 systemd[1]: Started cri-containerd-3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5.scope - libcontainer container 3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5. Apr 17 23:31:13.378515 containerd[1455]: time="2026-04-17T23:31:13.377821020Z" level=info msg="CreateContainer within sandbox \"c612977ef769db8bc4cde0940f05dc11ed987639dc39cf61e8830ffb95b80818\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"772851ebe7d48e5e1da3867604335a7a4a9ac32460e764073cdb5d0d02c42fee\"" Apr 17 23:31:13.378858 containerd[1455]: time="2026-04-17T23:31:13.378701793Z" level=info msg="StartContainer for \"772851ebe7d48e5e1da3867604335a7a4a9ac32460e764073cdb5d0d02c42fee\"" Apr 17 23:31:13.392842 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:31:13.402542 containerd[1455]: time="2026-04-17T23:31:13.398823480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:31:13.402542 containerd[1455]: time="2026-04-17T23:31:13.400227834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:31:13.402542 containerd[1455]: time="2026-04-17T23:31:13.400237945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:13.402542 containerd[1455]: time="2026-04-17T23:31:13.401531465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:13.408194 systemd-networkd[1380]: cali2c47e5b1189: Link UP Apr 17 23:31:13.418611 systemd-networkd[1380]: cali2c47e5b1189: Gained carrier Apr 17 23:31:13.436280 systemd[1]: Removed slice kubepods-besteffort-pod6a5bfb7f_45ca_40c7_b4cf_b11fe52150a4.slice - libcontainer container kubepods-besteffort-pod6a5bfb7f_45ca_40c7_b4cf_b11fe52150a4.slice. Apr 17 23:31:13.453672 systemd[1]: Started cri-containerd-51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095.scope - libcontainer container 51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095. Apr 17 23:31:13.462917 containerd[1455]: time="2026-04-17T23:31:13.462610558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-rnp9h,Uid:3c3c65d0-eb16-4067-b7da-ac4edc1b0e1b,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5\"" Apr 17 23:31:13.466574 kubelet[2507]: E0417 23:31:13.466133 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:13.481114 containerd[1455]: time="2026-04-17T23:31:13.480112347Z" level=info msg="CreateContainer within sandbox \"3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:12.822 [ERROR][3965] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:12.843 [INFO][3965] 
cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--9f7667bb8--6m64d-eth0 goldmane-9f7667bb8- calico-system 7293e6d6-a88e-43ea-b7ee-a4239f4cde4f 972 0 2026-04-17 23:30:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-9f7667bb8-6m64d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2c47e5b1189 [] [] }} ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Namespace="calico-system" Pod="goldmane-9f7667bb8-6m64d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--6m64d-" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:12.843 [INFO][3965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Namespace="calico-system" Pod="goldmane-9f7667bb8-6m64d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:12.965 [INFO][4036] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" HandleID="k8s-pod-network.4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Workload="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:12.975 [INFO][4036] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" HandleID="k8s-pod-network.4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Workload="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139d80), Attrs:map[string]string{"namespace":"calico-system", 
"node":"localhost", "pod":"goldmane-9f7667bb8-6m64d", "timestamp":"2026-04-17 23:31:12.965429479 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00025d080)} Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:12.976 [INFO][4036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.244 [INFO][4036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.244 [INFO][4036] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.251 [INFO][4036] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" host="localhost" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.279 [INFO][4036] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.297 [INFO][4036] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.305 [INFO][4036] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.325 [INFO][4036] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.326 [INFO][4036] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" host="localhost" Apr 17 23:31:13.490050 
containerd[1455]: 2026-04-17 23:31:13.334 [INFO][4036] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5 Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.344 [INFO][4036] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" host="localhost" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.370 [INFO][4036] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" host="localhost" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.370 [INFO][4036] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" host="localhost" Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.370 [INFO][4036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:31:13.490050 containerd[1455]: 2026-04-17 23:31:13.370 [INFO][4036] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" HandleID="k8s-pod-network.4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Workload="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" Apr 17 23:31:13.490974 containerd[1455]: 2026-04-17 23:31:13.396 [INFO][3965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Namespace="calico-system" Pod="goldmane-9f7667bb8-6m64d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--6m64d-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"7293e6d6-a88e-43ea-b7ee-a4239f4cde4f", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-9f7667bb8-6m64d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2c47e5b1189", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:13.490974 containerd[1455]: 2026-04-17 23:31:13.397 [INFO][3965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Namespace="calico-system" Pod="goldmane-9f7667bb8-6m64d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" Apr 17 23:31:13.490974 containerd[1455]: 2026-04-17 23:31:13.397 [INFO][3965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c47e5b1189 ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Namespace="calico-system" Pod="goldmane-9f7667bb8-6m64d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" Apr 17 23:31:13.490974 containerd[1455]: 2026-04-17 23:31:13.426 [INFO][3965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Namespace="calico-system" Pod="goldmane-9f7667bb8-6m64d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" Apr 17 23:31:13.490974 containerd[1455]: 2026-04-17 23:31:13.432 [INFO][3965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Namespace="calico-system" Pod="goldmane-9f7667bb8-6m64d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--9f7667bb8--6m64d-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"7293e6d6-a88e-43ea-b7ee-a4239f4cde4f", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 30, 47, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5", Pod:"goldmane-9f7667bb8-6m64d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2c47e5b1189", MAC:"e2:c1:5d:90:55:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:13.490974 containerd[1455]: 2026-04-17 23:31:13.453 [INFO][3965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5" Namespace="calico-system" Pod="goldmane-9f7667bb8-6m64d" WorkloadEndpoint="localhost-k8s-goldmane--9f7667bb8--6m64d-eth0" Apr 17 23:31:13.510812 containerd[1455]: time="2026-04-17T23:31:13.510215291Z" level=info msg="CreateContainer within sandbox \"3ad4d997455a0ca7fd97cc2e71132755a353324169149344de81258d69fffcb5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9bd3c0389a5c8e1bc6806f4cd139ff3aaae9803b9aa61a6856016d1012e34aad\"" Apr 17 23:31:13.513862 containerd[1455]: time="2026-04-17T23:31:13.513831795Z" level=info msg="StartContainer for \"9bd3c0389a5c8e1bc6806f4cd139ff3aaae9803b9aa61a6856016d1012e34aad\"" Apr 17 23:31:13.520648 systemd-networkd[1380]: cali968478fa0c6: Gained IPv6LL Apr 17 23:31:13.526781 systemd[1]: 
Started cri-containerd-772851ebe7d48e5e1da3867604335a7a4a9ac32460e764073cdb5d0d02c42fee.scope - libcontainer container 772851ebe7d48e5e1da3867604335a7a4a9ac32460e764073cdb5d0d02c42fee. Apr 17 23:31:13.536498 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:31:13.563471 containerd[1455]: time="2026-04-17T23:31:13.561110991Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:31:13.563471 containerd[1455]: time="2026-04-17T23:31:13.562653085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:31:13.563471 containerd[1455]: time="2026-04-17T23:31:13.562760015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:13.571231 containerd[1455]: time="2026-04-17T23:31:13.566731425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:13.588601 containerd[1455]: time="2026-04-17T23:31:13.587484002Z" level=info msg="StartContainer for \"772851ebe7d48e5e1da3867604335a7a4a9ac32460e764073cdb5d0d02c42fee\" returns successfully" Apr 17 23:31:13.625709 systemd[1]: Started cri-containerd-4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5.scope - libcontainer container 4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5. 
Apr 17 23:31:13.639869 containerd[1455]: time="2026-04-17T23:31:13.639628617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5f84d7f489-kwq9q,Uid:3364d307-610e-4ec3-b6b8-9438cae6845a,Namespace:calico-system,Attempt:0,} returns sandbox id \"51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095\"" Apr 17 23:31:13.643224 systemd[1]: Started cri-containerd-9bd3c0389a5c8e1bc6806f4cd139ff3aaae9803b9aa61a6856016d1012e34aad.scope - libcontainer container 9bd3c0389a5c8e1bc6806f4cd139ff3aaae9803b9aa61a6856016d1012e34aad. Apr 17 23:31:13.664939 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:31:13.667392 kubelet[2507]: E0417 23:31:13.664910 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:13.762238 containerd[1455]: time="2026-04-17T23:31:13.761805001Z" level=info msg="StartContainer for \"9bd3c0389a5c8e1bc6806f4cd139ff3aaae9803b9aa61a6856016d1012e34aad\" returns successfully" Apr 17 23:31:13.766029 kubelet[2507]: I0417 23:31:13.765786 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-c5kzw" podStartSLOduration=36.765773862 podStartE2EDuration="36.765773862s" podCreationTimestamp="2026-04-17 23:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:31:13.729787357 +0000 UTC m=+42.444992218" watchObservedRunningTime="2026-04-17 23:31:13.765773862 +0000 UTC m=+42.480978720" Apr 17 23:31:13.875454 systemd[1]: Created slice kubepods-besteffort-pod6407c09e_fed9_4cd8_8704_7d56e4e0bf69.slice - libcontainer container kubepods-besteffort-pod6407c09e_fed9_4cd8_8704_7d56e4e0bf69.slice. 
Apr 17 23:31:13.950650 containerd[1455]: time="2026-04-17T23:31:13.950588479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-6m64d,Uid:7293e6d6-a88e-43ea-b7ee-a4239f4cde4f,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5\"" Apr 17 23:31:13.953659 kubelet[2507]: I0417 23:31:13.953637 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6407c09e-fed9-4cd8-8704-7d56e4e0bf69-whisker-backend-key-pair\") pod \"whisker-7b74ffbd97-jrfzv\" (UID: \"6407c09e-fed9-4cd8-8704-7d56e4e0bf69\") " pod="calico-system/whisker-7b74ffbd97-jrfzv" Apr 17 23:31:13.953733 kubelet[2507]: I0417 23:31:13.953674 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/6407c09e-fed9-4cd8-8704-7d56e4e0bf69-nginx-config\") pod \"whisker-7b74ffbd97-jrfzv\" (UID: \"6407c09e-fed9-4cd8-8704-7d56e4e0bf69\") " pod="calico-system/whisker-7b74ffbd97-jrfzv" Apr 17 23:31:13.953733 kubelet[2507]: I0417 23:31:13.953686 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6407c09e-fed9-4cd8-8704-7d56e4e0bf69-whisker-ca-bundle\") pod \"whisker-7b74ffbd97-jrfzv\" (UID: \"6407c09e-fed9-4cd8-8704-7d56e4e0bf69\") " pod="calico-system/whisker-7b74ffbd97-jrfzv" Apr 17 23:31:13.953733 kubelet[2507]: I0417 23:31:13.953700 2507 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl6cf\" (UniqueName: \"kubernetes.io/projected/6407c09e-fed9-4cd8-8704-7d56e4e0bf69-kube-api-access-fl6cf\") pod \"whisker-7b74ffbd97-jrfzv\" (UID: \"6407c09e-fed9-4cd8-8704-7d56e4e0bf69\") " pod="calico-system/whisker-7b74ffbd97-jrfzv" Apr 17 23:31:13.979427 kernel: 
calico-node[4293]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:31:14.096761 systemd-networkd[1380]: calic771fb31d88: Gained IPv6LL Apr 17 23:31:14.199267 containerd[1455]: time="2026-04-17T23:31:14.199057711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b74ffbd97-jrfzv,Uid:6407c09e-fed9-4cd8-8704-7d56e4e0bf69,Namespace:calico-system,Attempt:0,}" Apr 17 23:31:14.453130 systemd[1]: Started sshd@8-10.0.0.28:22-10.0.0.1:52128.service - OpenSSH per-connection server daemon (10.0.0.1:52128). Apr 17 23:31:14.458970 systemd-networkd[1380]: cali872130939fe: Link UP Apr 17 23:31:14.461632 systemd-networkd[1380]: cali872130939fe: Gained carrier Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.317 [INFO][4542] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0 whisker-7b74ffbd97- calico-system 6407c09e-fed9-4cd8-8704-7d56e4e0bf69 1084 0 2026-04-17 23:31:13 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b74ffbd97 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7b74ffbd97-jrfzv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali872130939fe [] [] }} ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Namespace="calico-system" Pod="whisker-7b74ffbd97-jrfzv" WorkloadEndpoint="localhost-k8s-whisker--7b74ffbd97--jrfzv-" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.318 [INFO][4542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Namespace="calico-system" Pod="whisker-7b74ffbd97-jrfzv" WorkloadEndpoint="localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.383 [INFO][4558] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" HandleID="k8s-pod-network.7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Workload="localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.395 [INFO][4558] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" HandleID="k8s-pod-network.7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Workload="localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7b74ffbd97-jrfzv", "timestamp":"2026-04-17 23:31:14.383506245 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00049cc60)} Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.395 [INFO][4558] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.395 [INFO][4558] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.395 [INFO][4558] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.401 [INFO][4558] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" host="localhost" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.408 [INFO][4558] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.418 [INFO][4558] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.421 [INFO][4558] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.424 [INFO][4558] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.424 [INFO][4558] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" host="localhost" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.426 [INFO][4558] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25 Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.433 [INFO][4558] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" host="localhost" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.443 [INFO][4558] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" host="localhost" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.443 [INFO][4558] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" host="localhost" Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.443 [INFO][4558] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:31:14.485244 containerd[1455]: 2026-04-17 23:31:14.443 [INFO][4558] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" HandleID="k8s-pod-network.7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Workload="localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0" Apr 17 23:31:14.485848 containerd[1455]: 2026-04-17 23:31:14.452 [INFO][4542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Namespace="calico-system" Pod="whisker-7b74ffbd97-jrfzv" WorkloadEndpoint="localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0", GenerateName:"whisker-7b74ffbd97-", Namespace:"calico-system", SelfLink:"", UID:"6407c09e-fed9-4cd8-8704-7d56e4e0bf69", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 31, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b74ffbd97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7b74ffbd97-jrfzv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali872130939fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:14.485848 containerd[1455]: 2026-04-17 23:31:14.452 [INFO][4542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Namespace="calico-system" Pod="whisker-7b74ffbd97-jrfzv" WorkloadEndpoint="localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0" Apr 17 23:31:14.485848 containerd[1455]: 2026-04-17 23:31:14.452 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali872130939fe ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Namespace="calico-system" Pod="whisker-7b74ffbd97-jrfzv" WorkloadEndpoint="localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0" Apr 17 23:31:14.485848 containerd[1455]: 2026-04-17 23:31:14.462 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Namespace="calico-system" Pod="whisker-7b74ffbd97-jrfzv" WorkloadEndpoint="localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0" Apr 17 23:31:14.485848 containerd[1455]: 2026-04-17 23:31:14.467 [INFO][4542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Namespace="calico-system" Pod="whisker-7b74ffbd97-jrfzv" 
WorkloadEndpoint="localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0", GenerateName:"whisker-7b74ffbd97-", Namespace:"calico-system", SelfLink:"", UID:"6407c09e-fed9-4cd8-8704-7d56e4e0bf69", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 31, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b74ffbd97", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25", Pod:"whisker-7b74ffbd97-jrfzv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali872130939fe", MAC:"b6:a8:f9:b7:01:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:31:14.485848 containerd[1455]: 2026-04-17 23:31:14.482 [INFO][4542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25" Namespace="calico-system" Pod="whisker-7b74ffbd97-jrfzv" WorkloadEndpoint="localhost-k8s-whisker--7b74ffbd97--jrfzv-eth0" Apr 17 23:31:14.529856 systemd-networkd[1380]: vxlan.calico: Link UP Apr 17 23:31:14.529861 systemd-networkd[1380]: 
vxlan.calico: Gained carrier Apr 17 23:31:14.548128 containerd[1455]: time="2026-04-17T23:31:14.547767107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:31:14.548128 containerd[1455]: time="2026-04-17T23:31:14.547863924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:31:14.548128 containerd[1455]: time="2026-04-17T23:31:14.547876274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:14.548128 containerd[1455]: time="2026-04-17T23:31:14.547976810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:31:14.556651 sshd[4581]: Accepted publickey for core from 10.0.0.1 port 52128 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:31:14.557762 sshd[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:31:14.565943 systemd-logind[1438]: New session 9 of user core. Apr 17 23:31:14.572628 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:31:14.577250 systemd[1]: Started cri-containerd-7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25.scope - libcontainer container 7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25. 
Apr 17 23:31:14.598673 systemd-resolved[1382]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:31:14.648397 containerd[1455]: time="2026-04-17T23:31:14.646648412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b74ffbd97-jrfzv,Uid:6407c09e-fed9-4cd8-8704-7d56e4e0bf69,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25\"" Apr 17 23:31:14.674831 kubelet[2507]: E0417 23:31:14.674457 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:14.679287 kubelet[2507]: E0417 23:31:14.679135 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:14.710253 kubelet[2507]: I0417 23:31:14.710144 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-rnp9h" podStartSLOduration=37.710131131 podStartE2EDuration="37.710131131s" podCreationTimestamp="2026-04-17 23:30:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:31:14.690695788 +0000 UTC m=+43.405900645" watchObservedRunningTime="2026-04-17 23:31:14.710131131 +0000 UTC m=+43.425335987" Apr 17 23:31:14.800644 systemd-networkd[1380]: calibf9e3c682ac: Gained IPv6LL Apr 17 23:31:14.865694 systemd-networkd[1380]: calic954f3a874b: Gained IPv6LL Apr 17 23:31:14.867690 sshd[4581]: pam_unix(sshd:session): session closed for user core Apr 17 23:31:14.872163 systemd[1]: sshd@8-10.0.0.28:22-10.0.0.1:52128.service: Deactivated successfully. Apr 17 23:31:14.872456 systemd-logind[1438]: Session 9 logged out. Waiting for processes to exit. 
Apr 17 23:31:14.874479 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:31:14.878176 systemd-logind[1438]: Removed session 9. Apr 17 23:31:15.184666 systemd-networkd[1380]: cali3785399dd19: Gained IPv6LL Apr 17 23:31:15.376641 systemd-networkd[1380]: cali2c47e5b1189: Gained IPv6LL Apr 17 23:31:15.418321 kubelet[2507]: I0417 23:31:15.418261 2507 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4" path="/var/lib/kubelet/pods/6a5bfb7f-45ca-40c7-b4cf-b11fe52150a4/volumes" Apr 17 23:31:15.736970 kubelet[2507]: E0417 23:31:15.736931 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:15.738052 kubelet[2507]: E0417 23:31:15.737049 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:16.016596 systemd-networkd[1380]: vxlan.calico: Gained IPv6LL Apr 17 23:31:16.148101 systemd-networkd[1380]: cali872130939fe: Gained IPv6LL Apr 17 23:31:16.243815 containerd[1455]: time="2026-04-17T23:31:16.243482319Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:16.244710 containerd[1455]: time="2026-04-17T23:31:16.244523404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:31:16.247666 containerd[1455]: time="2026-04-17T23:31:16.247579145Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:16.252691 containerd[1455]: time="2026-04-17T23:31:16.252616381Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:16.255122 containerd[1455]: time="2026-04-17T23:31:16.254982135Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.166196291s" Apr 17 23:31:16.255122 containerd[1455]: time="2026-04-17T23:31:16.255072729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:31:16.257475 containerd[1455]: time="2026-04-17T23:31:16.257361787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:31:16.276903 containerd[1455]: time="2026-04-17T23:31:16.276485419Z" level=info msg="CreateContainer within sandbox \"90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:31:16.302611 containerd[1455]: time="2026-04-17T23:31:16.302489237Z" level=info msg="CreateContainer within sandbox \"90558103ec4a1fb811aa3f579f6d1fd003651d3ecf5a91be76d3f4973b703948\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"39214113225cd77870008c673bfbd7b784a1d1bdfd6203f217a6b5fc6ca88575\"" Apr 17 23:31:16.305863 containerd[1455]: time="2026-04-17T23:31:16.303842383Z" level=info msg="StartContainer for \"39214113225cd77870008c673bfbd7b784a1d1bdfd6203f217a6b5fc6ca88575\"" Apr 17 23:31:16.353980 systemd[1]: Started cri-containerd-39214113225cd77870008c673bfbd7b784a1d1bdfd6203f217a6b5fc6ca88575.scope - 
libcontainer container 39214113225cd77870008c673bfbd7b784a1d1bdfd6203f217a6b5fc6ca88575. Apr 17 23:31:16.419191 containerd[1455]: time="2026-04-17T23:31:16.419051058Z" level=info msg="StartContainer for \"39214113225cd77870008c673bfbd7b784a1d1bdfd6203f217a6b5fc6ca88575\" returns successfully" Apr 17 23:31:16.747873 kubelet[2507]: E0417 23:31:16.747660 2507 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:31:16.771520 kubelet[2507]: I0417 23:31:16.770828 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-857576c76b-6b74d" podStartSLOduration=24.60086945 podStartE2EDuration="28.77079664s" podCreationTimestamp="2026-04-17 23:30:48 +0000 UTC" firstStartedPulling="2026-04-17 23:31:12.086950006 +0000 UTC m=+40.802154852" lastFinishedPulling="2026-04-17 23:31:16.256877196 +0000 UTC m=+44.972082042" observedRunningTime="2026-04-17 23:31:16.768365477 +0000 UTC m=+45.483570328" watchObservedRunningTime="2026-04-17 23:31:16.77079664 +0000 UTC m=+45.486001508" Apr 17 23:31:17.783528 systemd[1]: run-containerd-runc-k8s.io-39214113225cd77870008c673bfbd7b784a1d1bdfd6203f217a6b5fc6ca88575-runc.UxdwJX.mount: Deactivated successfully. 
Apr 17 23:31:18.147577 containerd[1455]: time="2026-04-17T23:31:18.147236065Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:18.148233 containerd[1455]: time="2026-04-17T23:31:18.148145410Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:31:18.150091 containerd[1455]: time="2026-04-17T23:31:18.149938190Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:18.152109 containerd[1455]: time="2026-04-17T23:31:18.151915718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:18.153054 containerd[1455]: time="2026-04-17T23:31:18.152958777Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.89557262s" Apr 17 23:31:18.153054 containerd[1455]: time="2026-04-17T23:31:18.153023349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:31:18.155361 containerd[1455]: time="2026-04-17T23:31:18.155102819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:31:18.160995 containerd[1455]: time="2026-04-17T23:31:18.160821720Z" level=info msg="CreateContainer within sandbox \"ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:31:18.183370 containerd[1455]: time="2026-04-17T23:31:18.183241790Z" level=info msg="CreateContainer within sandbox \"ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"18d1de2bfe0442bcc81d3a30a9d6d86339525b40c7af5b3bb4396d7325d046fa\"" Apr 17 23:31:18.186129 containerd[1455]: time="2026-04-17T23:31:18.186007292Z" level=info msg="StartContainer for \"18d1de2bfe0442bcc81d3a30a9d6d86339525b40c7af5b3bb4396d7325d046fa\"" Apr 17 23:31:18.234859 systemd[1]: Started cri-containerd-18d1de2bfe0442bcc81d3a30a9d6d86339525b40c7af5b3bb4396d7325d046fa.scope - libcontainer container 18d1de2bfe0442bcc81d3a30a9d6d86339525b40c7af5b3bb4396d7325d046fa. Apr 17 23:31:18.265257 containerd[1455]: time="2026-04-17T23:31:18.265227658Z" level=info msg="StartContainer for \"18d1de2bfe0442bcc81d3a30a9d6d86339525b40c7af5b3bb4396d7325d046fa\" returns successfully" Apr 17 23:31:19.882009 systemd[1]: Started sshd@9-10.0.0.28:22-10.0.0.1:37524.service - OpenSSH per-connection server daemon (10.0.0.1:37524). Apr 17 23:31:19.916219 sshd[4864]: Accepted publickey for core from 10.0.0.1 port 37524 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:31:19.916768 sshd[4864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:31:19.929212 systemd-logind[1438]: New session 10 of user core. Apr 17 23:31:19.936991 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:31:20.089773 sshd[4864]: pam_unix(sshd:session): session closed for user core Apr 17 23:31:20.093237 systemd[1]: sshd@9-10.0.0.28:22-10.0.0.1:37524.service: Deactivated successfully. Apr 17 23:31:20.094783 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:31:20.096713 systemd-logind[1438]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:31:20.098958 systemd-logind[1438]: Removed session 10. 
Apr 17 23:31:21.313802 containerd[1455]: time="2026-04-17T23:31:21.313646468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:21.315482 containerd[1455]: time="2026-04-17T23:31:21.315113909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:31:21.316651 containerd[1455]: time="2026-04-17T23:31:21.316415059Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:21.319626 containerd[1455]: time="2026-04-17T23:31:21.319411056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:21.319988 containerd[1455]: time="2026-04-17T23:31:21.319934940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.164809545s" Apr 17 23:31:21.320055 containerd[1455]: time="2026-04-17T23:31:21.319988519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:31:21.321879 containerd[1455]: time="2026-04-17T23:31:21.321649162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:31:21.327063 containerd[1455]: time="2026-04-17T23:31:21.326974845Z" level=info msg="CreateContainer within sandbox 
\"8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:31:21.355258 containerd[1455]: time="2026-04-17T23:31:21.354984075Z" level=info msg="CreateContainer within sandbox \"8c9b2ea11bff3ae14ef98cf4636de053ab59d0bdb33d393fe9c14a1374544fd8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4a88a703cd2ae018adcc0715503cfe890aba1efe980599b11482eecb1bb03d83\"" Apr 17 23:31:21.357257 containerd[1455]: time="2026-04-17T23:31:21.356941827Z" level=info msg="StartContainer for \"4a88a703cd2ae018adcc0715503cfe890aba1efe980599b11482eecb1bb03d83\"" Apr 17 23:31:21.428090 systemd[1]: Started cri-containerd-4a88a703cd2ae018adcc0715503cfe890aba1efe980599b11482eecb1bb03d83.scope - libcontainer container 4a88a703cd2ae018adcc0715503cfe890aba1efe980599b11482eecb1bb03d83. Apr 17 23:31:21.501035 containerd[1455]: time="2026-04-17T23:31:21.500700599Z" level=info msg="StartContainer for \"4a88a703cd2ae018adcc0715503cfe890aba1efe980599b11482eecb1bb03d83\" returns successfully" Apr 17 23:31:21.783035 containerd[1455]: time="2026-04-17T23:31:21.782673974Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:31:21.786503 containerd[1455]: time="2026-04-17T23:31:21.786390073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:31:21.810108 containerd[1455]: time="2026-04-17T23:31:21.808745243Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 487.052489ms" Apr 17 23:31:21.811869 containerd[1455]: 
time="2026-04-17T23:31:21.811134217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:31:21.820763 containerd[1455]: time="2026-04-17T23:31:21.820640802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:31:21.834226 containerd[1455]: time="2026-04-17T23:31:21.834176490Z" level=info msg="CreateContainer within sandbox \"51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:31:21.945421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount955761648.mount: Deactivated successfully. Apr 17 23:31:21.947783 containerd[1455]: time="2026-04-17T23:31:21.947729055Z" level=info msg="CreateContainer within sandbox \"51ef3732063fc33243500b2f09bbcbb3d1d6e954d59053e6df4037c72ec80095\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fb0bbb22b1f550a895d4f8076a981b0f1ec8164f751db46ba18172491894fc07\"" Apr 17 23:31:21.949142 containerd[1455]: time="2026-04-17T23:31:21.948860018Z" level=info msg="StartContainer for \"fb0bbb22b1f550a895d4f8076a981b0f1ec8164f751db46ba18172491894fc07\"" Apr 17 23:31:22.010676 systemd[1]: Started cri-containerd-fb0bbb22b1f550a895d4f8076a981b0f1ec8164f751db46ba18172491894fc07.scope - libcontainer container fb0bbb22b1f550a895d4f8076a981b0f1ec8164f751db46ba18172491894fc07. 
Apr 17 23:31:22.174032 containerd[1455]: time="2026-04-17T23:31:22.173596165Z" level=info msg="StartContainer for \"fb0bbb22b1f550a895d4f8076a981b0f1ec8164f751db46ba18172491894fc07\" returns successfully" Apr 17 23:31:22.828795 kubelet[2507]: I0417 23:31:22.828242 2507 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:31:22.857288 kubelet[2507]: I0417 23:31:22.857084 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5f84d7f489-rm5t6" podStartSLOduration=27.739405767 podStartE2EDuration="35.857069624s" podCreationTimestamp="2026-04-17 23:30:47 +0000 UTC" firstStartedPulling="2026-04-17 23:31:13.203700837 +0000 UTC m=+41.918905683" lastFinishedPulling="2026-04-17 23:31:21.321364693 +0000 UTC m=+50.036569540" observedRunningTime="2026-04-17 23:31:21.827844047 +0000 UTC m=+50.543048909" watchObservedRunningTime="2026-04-17 23:31:22.857069624 +0000 UTC m=+51.572274469" Apr 17 23:31:23.838153 kubelet[2507]: I0417 23:31:23.838053 2507 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:31:23.838698 kubelet[2507]: I0417 23:31:23.838418 2507 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:31:24.771054 kubelet[2507]: I0417 23:31:24.770950 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-5f84d7f489-kwq9q" podStartSLOduration=29.598928508 podStartE2EDuration="37.770936568s" podCreationTimestamp="2026-04-17 23:30:47 +0000 UTC" firstStartedPulling="2026-04-17 23:31:13.645085333 +0000 UTC m=+42.360290180" lastFinishedPulling="2026-04-17 23:31:21.817093371 +0000 UTC m=+50.532298240" observedRunningTime="2026-04-17 23:31:22.866743927 +0000 UTC m=+51.581948783" watchObservedRunningTime="2026-04-17 23:31:24.770936568 +0000 UTC m=+53.486141425" Apr 17 23:31:25.115378 systemd[1]: Started sshd@10-10.0.0.28:22-10.0.0.1:37540.service - OpenSSH per-connection server daemon 
(10.0.0.1:37540). Apr 17 23:31:25.324371 sshd[5010]: Accepted publickey for core from 10.0.0.1 port 37540 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:31:25.324935 sshd[5010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:31:25.336245 systemd-logind[1438]: New session 11 of user core. Apr 17 23:31:25.343705 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:31:25.823729 sshd[5010]: pam_unix(sshd:session): session closed for user core Apr 17 23:31:25.833776 systemd[1]: sshd@10-10.0.0.28:22-10.0.0.1:37540.service: Deactivated successfully. Apr 17 23:31:25.839237 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:31:25.843462 systemd-logind[1438]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:31:25.854003 systemd[1]: Started sshd@11-10.0.0.28:22-10.0.0.1:37544.service - OpenSSH per-connection server daemon (10.0.0.1:37544). Apr 17 23:31:25.856231 systemd-logind[1438]: Removed session 11. Apr 17 23:31:25.918644 sshd[5043]: Accepted publickey for core from 10.0.0.1 port 37544 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:31:25.920879 sshd[5043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:31:25.928150 systemd-logind[1438]: New session 12 of user core. Apr 17 23:31:25.932264 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:31:26.064130 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3291462372.mount: Deactivated successfully. Apr 17 23:31:26.355086 sshd[5043]: pam_unix(sshd:session): session closed for user core Apr 17 23:31:26.376066 systemd[1]: sshd@11-10.0.0.28:22-10.0.0.1:37544.service: Deactivated successfully. Apr 17 23:31:26.453091 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:31:26.457057 systemd-logind[1438]: Session 12 logged out. Waiting for processes to exit. 
Apr 17 23:31:26.473184 systemd[1]: Started sshd@12-10.0.0.28:22-10.0.0.1:37554.service - OpenSSH per-connection server daemon (10.0.0.1:37554). Apr 17 23:31:26.478920 systemd-logind[1438]: Removed session 12. Apr 17 23:31:26.558138 sshd[5060]: Accepted publickey for core from 10.0.0.1 port 37554 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:31:26.560213 sshd[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:31:26.573106 systemd-logind[1438]: New session 13 of user core. Apr 17 23:31:26.579803 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:31:26.802086 sshd[5060]: pam_unix(sshd:session): session closed for user core Apr 17 23:31:26.805761 systemd-logind[1438]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:31:26.806191 systemd[1]: sshd@12-10.0.0.28:22-10.0.0.1:37554.service: Deactivated successfully. Apr 17 23:31:26.809732 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:31:26.811201 systemd-logind[1438]: Removed session 13. 
Apr 17 23:31:27.092715 containerd[1455]: time="2026-04-17T23:31:27.091391777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:27.094419 containerd[1455]: time="2026-04-17T23:31:27.093802660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Apr 17 23:31:27.099150 containerd[1455]: time="2026-04-17T23:31:27.098726296Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:27.106365 containerd[1455]: time="2026-04-17T23:31:27.106052040Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:27.139022 containerd[1455]: time="2026-04-17T23:31:27.138920106Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 5.318240649s"
Apr 17 23:31:27.139022 containerd[1455]: time="2026-04-17T23:31:27.138962364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Apr 17 23:31:27.142031 containerd[1455]: time="2026-04-17T23:31:27.141112310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\""
Apr 17 23:31:27.154055 containerd[1455]: time="2026-04-17T23:31:27.154009099Z" level=info msg="CreateContainer within sandbox \"4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 17 23:31:27.200764 containerd[1455]: time="2026-04-17T23:31:27.200681278Z" level=info msg="CreateContainer within sandbox \"4d52f3d6d23a2ac7e6590b90efe2e41ef295d79166e99c68970c18d57ed600f5\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b22a305f24f8fd5c4b7aca3201f0be01243610d19a75768f8ccbfe2d23e16b40\""
Apr 17 23:31:27.201772 containerd[1455]: time="2026-04-17T23:31:27.201751319Z" level=info msg="StartContainer for \"b22a305f24f8fd5c4b7aca3201f0be01243610d19a75768f8ccbfe2d23e16b40\""
Apr 17 23:31:27.333059 systemd[1]: Started cri-containerd-b22a305f24f8fd5c4b7aca3201f0be01243610d19a75768f8ccbfe2d23e16b40.scope - libcontainer container b22a305f24f8fd5c4b7aca3201f0be01243610d19a75768f8ccbfe2d23e16b40.
Apr 17 23:31:27.489101 containerd[1455]: time="2026-04-17T23:31:27.488634156Z" level=info msg="StartContainer for \"b22a305f24f8fd5c4b7aca3201f0be01243610d19a75768f8ccbfe2d23e16b40\" returns successfully"
Apr 17 23:31:27.898835 systemd[1]: run-containerd-runc-k8s.io-b22a305f24f8fd5c4b7aca3201f0be01243610d19a75768f8ccbfe2d23e16b40-runc.BKTOni.mount: Deactivated successfully.
Apr 17 23:31:28.995221 systemd[1]: run-containerd-runc-k8s.io-b22a305f24f8fd5c4b7aca3201f0be01243610d19a75768f8ccbfe2d23e16b40-runc.oh1M4a.mount: Deactivated successfully.
Apr 17 23:31:29.168419 containerd[1455]: time="2026-04-17T23:31:29.168253822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:29.172277 containerd[1455]: time="2026-04-17T23:31:29.172173238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 17 23:31:29.174755 containerd[1455]: time="2026-04-17T23:31:29.174388202Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:29.179362 containerd[1455]: time="2026-04-17T23:31:29.177692364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:29.179362 containerd[1455]: time="2026-04-17T23:31:29.178968796Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.037817141s"
Apr 17 23:31:29.179362 containerd[1455]: time="2026-04-17T23:31:29.179070874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 17 23:31:29.181403 containerd[1455]: time="2026-04-17T23:31:29.181101889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 17 23:31:29.194487 containerd[1455]: time="2026-04-17T23:31:29.194420822Z" level=info msg="CreateContainer within sandbox \"7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 17 23:31:29.218836 containerd[1455]: time="2026-04-17T23:31:29.218699468Z" level=info msg="CreateContainer within sandbox \"7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ec870f07521714bdc56da184e9b3065274d0f693b94c731eaf815c8f41ec02f2\""
Apr 17 23:31:29.221569 containerd[1455]: time="2026-04-17T23:31:29.221466232Z" level=info msg="StartContainer for \"ec870f07521714bdc56da184e9b3065274d0f693b94c731eaf815c8f41ec02f2\""
Apr 17 23:31:29.279610 systemd[1]: Started cri-containerd-ec870f07521714bdc56da184e9b3065274d0f693b94c731eaf815c8f41ec02f2.scope - libcontainer container ec870f07521714bdc56da184e9b3065274d0f693b94c731eaf815c8f41ec02f2.
Apr 17 23:31:29.340136 containerd[1455]: time="2026-04-17T23:31:29.340027578Z" level=info msg="StartContainer for \"ec870f07521714bdc56da184e9b3065274d0f693b94c731eaf815c8f41ec02f2\" returns successfully"
Apr 17 23:31:29.983713 systemd[1]: run-containerd-runc-k8s.io-ec870f07521714bdc56da184e9b3065274d0f693b94c731eaf815c8f41ec02f2-runc.uZ1eZY.mount: Deactivated successfully.
Apr 17 23:31:30.789464 containerd[1455]: time="2026-04-17T23:31:30.789263414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:30.790671 containerd[1455]: time="2026-04-17T23:31:30.790597317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 17 23:31:30.792029 containerd[1455]: time="2026-04-17T23:31:30.791977276Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:30.795489 containerd[1455]: time="2026-04-17T23:31:30.795402826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:30.796472 containerd[1455]: time="2026-04-17T23:31:30.796371909Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.614892197s"
Apr 17 23:31:30.796472 containerd[1455]: time="2026-04-17T23:31:30.796401822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 17 23:31:30.800170 containerd[1455]: time="2026-04-17T23:31:30.799243273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\""
Apr 17 23:31:30.804271 containerd[1455]: time="2026-04-17T23:31:30.804183468Z" level=info msg="CreateContainer within sandbox \"ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 17 23:31:30.827083 containerd[1455]: time="2026-04-17T23:31:30.827007761Z" level=info msg="CreateContainer within sandbox \"ad9901486a5bcfec68c0d58cf07964f918c9abbc2da07f04905592d458eb2c0a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"62302195ee3bf1241fb2f6178133d7293d326971695e647df5cc435e34fbce11\""
Apr 17 23:31:30.830084 containerd[1455]: time="2026-04-17T23:31:30.828131518Z" level=info msg="StartContainer for \"62302195ee3bf1241fb2f6178133d7293d326971695e647df5cc435e34fbce11\""
Apr 17 23:31:30.879752 systemd[1]: Started cri-containerd-62302195ee3bf1241fb2f6178133d7293d326971695e647df5cc435e34fbce11.scope - libcontainer container 62302195ee3bf1241fb2f6178133d7293d326971695e647df5cc435e34fbce11.
Apr 17 23:31:30.927801 containerd[1455]: time="2026-04-17T23:31:30.927389002Z" level=info msg="StartContainer for \"62302195ee3bf1241fb2f6178133d7293d326971695e647df5cc435e34fbce11\" returns successfully"
Apr 17 23:31:31.543069 kubelet[2507]: I0417 23:31:31.542962 2507 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 17 23:31:31.543069 kubelet[2507]: I0417 23:31:31.543036 2507 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 17 23:31:31.836116 systemd[1]: Started sshd@13-10.0.0.28:22-10.0.0.1:33772.service - OpenSSH per-connection server daemon (10.0.0.1:33772).
Apr 17 23:31:31.963237 kubelet[2507]: I0417 23:31:31.963122 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-6m64d" podStartSLOduration=31.781174451 podStartE2EDuration="44.963104037s" podCreationTimestamp="2026-04-17 23:30:47 +0000 UTC" firstStartedPulling="2026-04-17 23:31:13.958481651 +0000 UTC m=+42.673686497" lastFinishedPulling="2026-04-17 23:31:27.140411233 +0000 UTC m=+55.855616083" observedRunningTime="2026-04-17 23:31:27.880537526 +0000 UTC m=+56.595742376" watchObservedRunningTime="2026-04-17 23:31:31.963104037 +0000 UTC m=+60.678308894"
Apr 17 23:31:31.997367 sshd[5249]: Accepted publickey for core from 10.0.0.1 port 33772 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:31.998853 sshd[5249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:32.005089 systemd-logind[1438]: New session 14 of user core.
Apr 17 23:31:32.013980 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:31:32.247058 sshd[5249]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:32.253803 systemd[1]: sshd@13-10.0.0.28:22-10.0.0.1:33772.service: Deactivated successfully.
Apr 17 23:31:32.255600 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:31:32.257153 systemd-logind[1438]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:31:32.263078 systemd[1]: Started sshd@14-10.0.0.28:22-10.0.0.1:33776.service - OpenSSH per-connection server daemon (10.0.0.1:33776).
Apr 17 23:31:32.265093 systemd-logind[1438]: Removed session 14.
Apr 17 23:31:32.309733 sshd[5264]: Accepted publickey for core from 10.0.0.1 port 33776 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:32.311566 sshd[5264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:32.315979 systemd-logind[1438]: New session 15 of user core.
Apr 17 23:31:32.321781 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:31:32.684239 sshd[5264]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:32.691190 systemd[1]: sshd@14-10.0.0.28:22-10.0.0.1:33776.service: Deactivated successfully.
Apr 17 23:31:32.692804 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:31:32.694036 systemd-logind[1438]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:31:32.699719 systemd[1]: Started sshd@15-10.0.0.28:22-10.0.0.1:33780.service - OpenSSH per-connection server daemon (10.0.0.1:33780).
Apr 17 23:31:32.703413 systemd-logind[1438]: Removed session 15.
Apr 17 23:31:32.752124 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 33780 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:32.753589 sshd[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:32.758790 systemd-logind[1438]: New session 16 of user core.
Apr 17 23:31:32.768677 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:31:33.569902 sshd[5276]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:33.586776 systemd[1]: Started sshd@16-10.0.0.28:22-10.0.0.1:33786.service - OpenSSH per-connection server daemon (10.0.0.1:33786).
Apr 17 23:31:33.587098 systemd[1]: sshd@15-10.0.0.28:22-10.0.0.1:33780.service: Deactivated successfully.
Apr 17 23:31:33.591845 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:31:33.602036 systemd-logind[1438]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:31:33.610194 systemd-logind[1438]: Removed session 16.
Apr 17 23:31:33.721630 sshd[5300]: Accepted publickey for core from 10.0.0.1 port 33786 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:33.722793 sshd[5300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:33.730639 systemd-logind[1438]: New session 17 of user core.
Apr 17 23:31:33.736490 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:31:33.822026 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831493803.mount: Deactivated successfully.
Apr 17 23:31:33.852607 containerd[1455]: time="2026-04-17T23:31:33.852523423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:33.855665 containerd[1455]: time="2026-04-17T23:31:33.855526185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 17 23:31:33.857678 containerd[1455]: time="2026-04-17T23:31:33.856822402Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:33.860520 containerd[1455]: time="2026-04-17T23:31:33.860439181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:31:33.863168 containerd[1455]: time="2026-04-17T23:31:33.862981268Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.063706097s"
Apr 17 23:31:33.863168 containerd[1455]: time="2026-04-17T23:31:33.863045783Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 17 23:31:33.871850 containerd[1455]: time="2026-04-17T23:31:33.871673102Z" level=info msg="CreateContainer within sandbox \"7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 17 23:31:33.950388 containerd[1455]: time="2026-04-17T23:31:33.949383861Z" level=info msg="CreateContainer within sandbox \"7bed1919466d59c35b56d7f5395c6f54cb77fd8d085477cbd154796532f0cf25\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"c09b4c0fe54597124b5384338e051434c5d47368df8309a2c957f0b5162479a6\""
Apr 17 23:31:33.952190 containerd[1455]: time="2026-04-17T23:31:33.952007423Z" level=info msg="StartContainer for \"c09b4c0fe54597124b5384338e051434c5d47368df8309a2c957f0b5162479a6\""
Apr 17 23:31:34.014713 systemd[1]: Started cri-containerd-c09b4c0fe54597124b5384338e051434c5d47368df8309a2c957f0b5162479a6.scope - libcontainer container c09b4c0fe54597124b5384338e051434c5d47368df8309a2c957f0b5162479a6.
Apr 17 23:31:34.098714 containerd[1455]: time="2026-04-17T23:31:34.098479749Z" level=info msg="StartContainer for \"c09b4c0fe54597124b5384338e051434c5d47368df8309a2c957f0b5162479a6\" returns successfully"
Apr 17 23:31:34.323172 sshd[5300]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:34.335429 systemd[1]: sshd@16-10.0.0.28:22-10.0.0.1:33786.service: Deactivated successfully.
Apr 17 23:31:34.338900 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:31:34.340827 systemd-logind[1438]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:31:34.353396 systemd[1]: Started sshd@17-10.0.0.28:22-10.0.0.1:33794.service - OpenSSH per-connection server daemon (10.0.0.1:33794).
Apr 17 23:31:34.361776 systemd-logind[1438]: Removed session 17.
Apr 17 23:31:34.408797 sshd[5363]: Accepted publickey for core from 10.0.0.1 port 33794 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:34.410258 sshd[5363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:34.416103 systemd-logind[1438]: New session 18 of user core.
Apr 17 23:31:34.430078 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:31:34.602420 sshd[5363]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:34.606894 systemd[1]: sshd@17-10.0.0.28:22-10.0.0.1:33794.service: Deactivated successfully.
Apr 17 23:31:34.609406 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:31:34.612248 systemd-logind[1438]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:31:34.614277 systemd-logind[1438]: Removed session 18.
Apr 17 23:31:35.006798 kubelet[2507]: I0417 23:31:35.006238 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-ktqpr" podStartSLOduration=28.464828324 podStartE2EDuration="47.006216094s" podCreationTimestamp="2026-04-17 23:30:48 +0000 UTC" firstStartedPulling="2026-04-17 23:31:12.25652755 +0000 UTC m=+40.971732396" lastFinishedPulling="2026-04-17 23:31:30.797915303 +0000 UTC m=+59.513120166" observedRunningTime="2026-04-17 23:31:31.963484426 +0000 UTC m=+60.678689273" watchObservedRunningTime="2026-04-17 23:31:35.006216094 +0000 UTC m=+63.721420957"
Apr 17 23:31:35.010378 kubelet[2507]: I0417 23:31:35.010234 2507 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-7b74ffbd97-jrfzv" podStartSLOduration=2.795862843 podStartE2EDuration="22.010214993s" podCreationTimestamp="2026-04-17 23:31:13 +0000 UTC" firstStartedPulling="2026-04-17 23:31:14.650911521 +0000 UTC m=+43.366116368" lastFinishedPulling="2026-04-17 23:31:33.865263664 +0000 UTC m=+62.580468518" observedRunningTime="2026-04-17 23:31:35.008147593 +0000 UTC m=+63.723352445" watchObservedRunningTime="2026-04-17 23:31:35.010214993 +0000 UTC m=+63.725419839"
Apr 17 23:31:39.620882 systemd[1]: Started sshd@18-10.0.0.28:22-10.0.0.1:47608.service - OpenSSH per-connection server daemon (10.0.0.1:47608).
Apr 17 23:31:39.658394 sshd[5401]: Accepted publickey for core from 10.0.0.1 port 47608 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:39.659696 sshd[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:39.666555 systemd-logind[1438]: New session 19 of user core.
Apr 17 23:31:39.675020 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:31:39.895227 sshd[5401]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:39.902054 systemd[1]: sshd@18-10.0.0.28:22-10.0.0.1:47608.service: Deactivated successfully.
Apr 17 23:31:39.904487 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:31:39.905834 systemd-logind[1438]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:31:39.907252 systemd-logind[1438]: Removed session 19.
Apr 17 23:31:42.592952 systemd[1]: run-containerd-runc-k8s.io-39214113225cd77870008c673bfbd7b784a1d1bdfd6203f217a6b5fc6ca88575-runc.vxUMEC.mount: Deactivated successfully.
Apr 17 23:31:44.911852 systemd[1]: Started sshd@19-10.0.0.28:22-10.0.0.1:47610.service - OpenSSH per-connection server daemon (10.0.0.1:47610).
Apr 17 23:31:44.966276 sshd[5464]: Accepted publickey for core from 10.0.0.1 port 47610 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:44.968065 sshd[5464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:44.973016 systemd-logind[1438]: New session 20 of user core.
Apr 17 23:31:44.982086 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:31:45.192755 sshd[5464]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:45.196806 systemd[1]: sshd@19-10.0.0.28:22-10.0.0.1:47610.service: Deactivated successfully.
Apr 17 23:31:45.198501 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:31:45.199494 systemd-logind[1438]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:31:45.202477 systemd-logind[1438]: Removed session 20.
Apr 17 23:31:45.905356 kubelet[2507]: I0417 23:31:45.905112 2507 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness"
Apr 17 23:31:50.205474 systemd[1]: Started sshd@20-10.0.0.28:22-10.0.0.1:53006.service - OpenSSH per-connection server daemon (10.0.0.1:53006).
Apr 17 23:31:50.298789 sshd[5504]: Accepted publickey for core from 10.0.0.1 port 53006 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100
Apr 17 23:31:50.301171 sshd[5504]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:31:50.306414 systemd-logind[1438]: New session 21 of user core.
Apr 17 23:31:50.310499 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:31:50.660223 sshd[5504]: pam_unix(sshd:session): session closed for user core
Apr 17 23:31:50.663224 systemd[1]: sshd@20-10.0.0.28:22-10.0.0.1:53006.service: Deactivated successfully.
Apr 17 23:31:50.666010 systemd-logind[1438]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:31:50.667158 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:31:50.672552 systemd-logind[1438]: Removed session 21.