Apr 17 23:32:58.866434 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:32:58.866465 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:32:58.866479 kernel: BIOS-provided physical RAM map:
Apr 17 23:32:58.866487 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 17 23:32:58.866495 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 17 23:32:58.866502 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 17 23:32:58.866512 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 17 23:32:58.866520 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 17 23:32:58.866528 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 17 23:32:58.866540 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 17 23:32:58.866548 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 17 23:32:58.866556 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 17 23:32:58.866564 kernel: NX (Execute Disable) protection: active
Apr 17 23:32:58.866572 kernel: APIC: Static calls initialized
Apr 17 23:32:58.866582 kernel: SMBIOS 2.8 present.
Apr 17 23:32:58.866593 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 17 23:32:58.866602 kernel: Hypervisor detected: KVM
Apr 17 23:32:58.866611 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:32:58.866619 kernel: kvm-clock: using sched offset of 5863601974 cycles
Apr 17 23:32:58.866628 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:32:58.866637 kernel: tsc: Detected 2793.438 MHz processor
Apr 17 23:32:58.866646 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:32:58.866655 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:32:58.866664 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 17 23:32:58.866676 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 17 23:32:58.866685 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:32:58.866694 kernel: Using GB pages for direct mapping
Apr 17 23:32:58.866702 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:32:58.866711 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 17 23:32:58.866720 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:32:58.866729 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:32:58.866738 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:32:58.866746 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 17 23:32:58.866757 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:32:58.866766 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:32:58.866775 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:32:58.866783 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 17 23:32:58.866792 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 17 23:32:58.866801 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 17 23:32:58.866830 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 17 23:32:58.866844 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 17 23:32:58.866856 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 17 23:32:58.866865 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 17 23:32:58.866875 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 17 23:32:58.866884 kernel: No NUMA configuration found
Apr 17 23:32:58.866893 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 17 23:32:58.866902 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 17 23:32:58.866914 kernel: Zone ranges:
Apr 17 23:32:58.866924 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:32:58.866933 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 17 23:32:58.866942 kernel: Normal empty
Apr 17 23:32:58.866951 kernel: Movable zone start for each node
Apr 17 23:32:58.866961 kernel: Early memory node ranges
Apr 17 23:32:58.866970 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 17 23:32:58.866979 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 17 23:32:58.866988 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 17 23:32:58.866997 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:32:58.867008 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 17 23:32:58.867018 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 17 23:32:58.867027 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 17 23:32:58.867037 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:32:58.867046 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 17 23:32:58.867055 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 17 23:32:58.867065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:32:58.867074 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:32:58.867083 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:32:58.867094 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:32:58.867104 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:32:58.867113 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:32:58.867122 kernel: TSC deadline timer available
Apr 17 23:32:58.867132 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 17 23:32:58.867141 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:32:58.867150 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 17 23:32:58.867159 kernel: kvm-guest: setup PV sched yield
Apr 17 23:32:58.867169 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 17 23:32:58.867180 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:32:58.867189 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:32:58.867198 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 17 23:32:58.867207 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 17 23:32:58.867217 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 17 23:32:58.867226 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 17 23:32:58.867235 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:32:58.867245 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:32:58.867255 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:32:58.867267 kernel: random: crng init done
Apr 17 23:32:58.867275 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 17 23:32:58.867285 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:32:58.867294 kernel: Fallback order for Node 0: 0
Apr 17 23:32:58.867303 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 17 23:32:58.867343 kernel: Policy zone: DMA32
Apr 17 23:32:58.867353 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:32:58.867363 kernel: Memory: 2433652K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 137896K reserved, 0K cma-reserved)
Apr 17 23:32:58.867376 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 17 23:32:58.867385 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:32:58.867394 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:32:58.867403 kernel: Dynamic Preempt: voluntary
Apr 17 23:32:58.867413 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:32:58.867476 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:32:58.867487 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 17 23:32:58.867496 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:32:58.867506 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:32:58.867517 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:32:58.867527 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:32:58.867537 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 17 23:32:58.867545 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 17 23:32:58.867555 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:32:58.867564 kernel: Console: colour VGA+ 80x25
Apr 17 23:32:58.867573 kernel: printk: console [ttyS0] enabled
Apr 17 23:32:58.867582 kernel: ACPI: Core revision 20230628
Apr 17 23:32:58.867592 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 17 23:32:58.867603 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:32:58.867613 kernel: x2apic enabled
Apr 17 23:32:58.867622 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:32:58.867631 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 17 23:32:58.867641 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 17 23:32:58.867650 kernel: kvm-guest: setup PV IPIs
Apr 17 23:32:58.867659 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 17 23:32:58.867669 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:32:58.867687 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 17 23:32:58.867697 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 17 23:32:58.867707 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 17 23:32:58.867717 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 17 23:32:58.867729 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:32:58.867738 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:32:58.867749 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:32:58.867759 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:32:58.867771 kernel: RETBleed: Vulnerable
Apr 17 23:32:58.867781 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:32:58.867791 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:32:58.867801 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:32:58.867829 kernel: active return thunk: its_return_thunk
Apr 17 23:32:58.867840 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:32:58.867850 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:32:58.867860 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:32:58.867870 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:32:58.867883 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:32:58.867893 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:32:58.867903 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:32:58.867912 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:32:58.867923 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 17 23:32:58.867932 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 17 23:32:58.867943 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 17 23:32:58.867953 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 17 23:32:58.867963 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:32:58.867974 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:32:58.867984 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:32:58.867993 kernel: landlock: Up and running.
Apr 17 23:32:58.868002 kernel: SELinux: Initializing.
Apr 17 23:32:58.868010 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:32:58.868019 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 17 23:32:58.868028 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 17 23:32:58.868037 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:32:58.868047 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:32:58.868059 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 17 23:32:58.868069 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 17 23:32:58.868079 kernel: signal: max sigframe size: 3632
Apr 17 23:32:58.868089 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:32:58.868099 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:32:58.868109 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:32:58.868119 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:32:58.868128 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:32:58.868137 kernel: .... node #0, CPUs: #1 #2 #3
Apr 17 23:32:58.868149 kernel: smp: Brought up 1 node, 4 CPUs
Apr 17 23:32:58.868157 kernel: smpboot: Max logical packages: 1
Apr 17 23:32:58.868167 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 17 23:32:58.868176 kernel: devtmpfs: initialized
Apr 17 23:32:58.868185 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:32:58.868194 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:32:58.868204 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 17 23:32:58.868214 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:32:58.868223 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:32:58.868234 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:32:58.868243 kernel: audit: type=2000 audit(1776468777.839:1): state=initialized audit_enabled=0 res=1
Apr 17 23:32:58.868252 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:32:58.868262 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:32:58.868271 kernel: cpuidle: using governor menu
Apr 17 23:32:58.868281 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:32:58.868290 kernel: dca service started, version 1.12.1
Apr 17 23:32:58.868300 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 17 23:32:58.868343 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 17 23:32:58.868355 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:32:58.868365 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:32:58.868376 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:32:58.868386 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:32:58.868395 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:32:58.868404 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:32:58.868412 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:32:58.868420 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:32:58.868428 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:32:58.868437 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 17 23:32:58.868446 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:32:58.868453 kernel: ACPI: Interpreter enabled
Apr 17 23:32:58.868461 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 17 23:32:58.868469 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:32:58.868477 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:32:58.868485 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:32:58.868493 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 17 23:32:58.868501 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:32:58.868674 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:32:58.868753 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 17 23:32:58.868848 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 17 23:32:58.868859 kernel: PCI host bridge to bus 0000:00
Apr 17 23:32:58.868933 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:32:58.868998 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:32:58.869063 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:32:58.869124 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 17 23:32:58.869186 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 17 23:32:58.869274 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 17 23:32:58.869397 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:32:58.869493 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 17 23:32:58.869590 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 17 23:32:58.869685 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 17 23:32:58.869775 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 17 23:32:58.869891 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 17 23:32:58.869982 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:32:58.870078 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 17 23:32:58.870169 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 17 23:32:58.870259 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 17 23:32:58.870394 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 17 23:32:58.870508 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 17 23:32:58.870603 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 17 23:32:58.870694 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 17 23:32:58.870785 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 17 23:32:58.870906 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 17 23:32:58.871004 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 17 23:32:58.871095 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 17 23:32:58.871178 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 17 23:32:58.871537 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 17 23:32:58.871650 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 17 23:32:58.871736 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 17 23:32:58.871858 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 17 23:32:58.871958 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 17 23:32:58.872044 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 17 23:32:58.872136 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 17 23:32:58.872219 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 17 23:32:58.872232 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:32:58.872243 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:32:58.872253 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:32:58.872263 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:32:58.872275 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 17 23:32:58.872286 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 17 23:32:58.872296 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 17 23:32:58.872342 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 17 23:32:58.872353 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 17 23:32:58.872364 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 17 23:32:58.872387 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 17 23:32:58.872430 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 17 23:32:58.872452 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 17 23:32:58.872486 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 17 23:32:58.872507 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 17 23:32:58.872540 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 17 23:32:58.872561 kernel: iommu: Default domain type: Translated
Apr 17 23:32:58.872602 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:32:58.872623 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:32:58.872655 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:32:58.872697 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 17 23:32:58.872739 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 17 23:32:58.872949 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 17 23:32:58.873082 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 17 23:32:58.873178 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:32:58.873192 kernel: vgaarb: loaded
Apr 17 23:32:58.873204 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 17 23:32:58.873215 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 17 23:32:58.873226 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:32:58.873237 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:32:58.873250 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:32:58.873261 kernel: pnp: PnP ACPI init
Apr 17 23:32:58.873444 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 17 23:32:58.873463 kernel: pnp: PnP ACPI: found 6 devices
Apr 17 23:32:58.873474 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:32:58.873484 kernel: NET: Registered PF_INET protocol family
Apr 17 23:32:58.873495 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 17 23:32:58.873506 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 17 23:32:58.873520 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:32:58.873531 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:32:58.873541 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 17 23:32:58.873551 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 17 23:32:58.873563 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:32:58.873573 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 17 23:32:58.873584 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:32:58.873595 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:32:58.873679 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:32:58.873761 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:32:58.873877 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:32:58.874050 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 17 23:32:58.874167 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 17 23:32:58.874248 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 17 23:32:58.874262 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:32:58.874272 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:32:58.874282 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 17 23:32:58.874293 kernel: Initialise system trusted keyrings
Apr 17 23:32:58.874348 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 17 23:32:58.874358 kernel: Key type asymmetric registered
Apr 17 23:32:58.874369 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:32:58.874379 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:32:58.874389 kernel: io scheduler mq-deadline registered
Apr 17 23:32:58.874399 kernel: io scheduler kyber registered
Apr 17 23:32:58.874410 kernel: io scheduler bfq registered
Apr 17 23:32:58.874420 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:32:58.874430 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 17 23:32:58.874444 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 17 23:32:58.874455 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 17 23:32:58.874465 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:32:58.874475 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:32:58.874485 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:32:58.874495 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:32:58.874504 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:32:58.874612 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 17 23:32:58.874629 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:32:58.874707 kernel: rtc_cmos 00:04: registered as rtc0
Apr 17 23:32:58.874863 kernel: rtc_cmos 00:04: setting system clock to 2026-04-17T23:32:58 UTC (1776468778)
Apr 17 23:32:58.874966 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 17 23:32:58.874980 kernel: intel_pstate: CPU model not supported
Apr 17 23:32:58.874991 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:32:58.875001 kernel: Segment Routing with IPv6
Apr 17 23:32:58.875012 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:32:58.875025 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:32:58.875036 kernel: Key type dns_resolver registered
Apr 17 23:32:58.875047 kernel: IPI shorthand broadcast: enabled
Apr 17 23:32:58.875058 kernel: sched_clock: Marking stable (890011931, 383533388)->(1389288849, -115743530)
Apr 17 23:32:58.875068 kernel: registered taskstats version 1
Apr 17 23:32:58.875079 kernel: Loading compiled-in X.509 certificates
Apr 17 23:32:58.875090 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:32:58.875100 kernel: Key type .fscrypt registered
Apr 17 23:32:58.875109 kernel: Key type fscrypt-provisioning registered
Apr 17 23:32:58.875119 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:32:58.875131 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:32:58.875141 kernel: ima: No architecture policies found
Apr 17 23:32:58.875151 kernel: clk: Disabling unused clocks
Apr 17 23:32:58.875161 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:32:58.875170 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:32:58.875180 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:32:58.875190 kernel: Run /init as init process
Apr 17 23:32:58.875200 kernel: with arguments:
Apr 17 23:32:58.875210 kernel: /init
Apr 17 23:32:58.875221 kernel: with environment:
Apr 17 23:32:58.875230 kernel: HOME=/
Apr 17 23:32:58.875240 kernel: TERM=linux
Apr 17 23:32:58.875252 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:32:58.875264 systemd[1]: Detected virtualization kvm.
Apr 17 23:32:58.875275 systemd[1]: Detected architecture x86-64.
Apr 17 23:32:58.875284 systemd[1]: Running in initrd.
Apr 17 23:32:58.875296 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:32:58.875381 systemd[1]: Hostname set to .
Apr 17 23:32:58.875394 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:32:58.875405 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:32:58.875416 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:32:58.875427 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:32:58.875439 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:32:58.875450 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:32:58.875462 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:32:58.875473 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:32:58.875497 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:32:58.875508 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:32:58.875518 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:32:58.875531 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:32:58.875543 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:32:58.875554 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:32:58.875565 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:32:58.875575 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:32:58.875586 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:32:58.875598 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:32:58.875608 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:32:58.875619 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:32:58.875632 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:32:58.875642 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:32:58.875653 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:32:58.875664 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:32:58.875676 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:32:58.875687 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:32:58.875699 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:32:58.875710 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:32:58.875722 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:32:58.875732 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:32:58.875742 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:32:58.875751 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:32:58.875762 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:32:58.875865 systemd-journald[193]: Collecting audit messages is disabled.
Apr 17 23:32:58.875898 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:32:58.875915 systemd-journald[193]: Journal started
Apr 17 23:32:58.875940 systemd-journald[193]: Runtime Journal (/run/log/journal/bb9d80be56584601b1df2ef01e539c1d) is 6.0M, max 48.4M, 42.3M free.
Apr 17 23:32:58.873029 systemd-modules-load[194]: Inserted module 'overlay'
Apr 17 23:32:58.884419 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:32:58.884461 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:32:58.898340 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:32:58.900336 kernel: Bridge firewalling registered
Apr 17 23:32:58.900371 systemd-modules-load[194]: Inserted module 'br_netfilter'
Apr 17 23:32:58.975095 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:32:58.975899 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:32:58.979995 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:32:59.006448 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:32:59.007880 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:32:59.008857 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:32:59.018288 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:32:59.026909 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:32:59.028030 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:32:59.035873 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:32:59.052482 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:32:59.054023 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:32:59.061545 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:32:59.065374 dracut-cmdline[227]: dracut-dracut-053
Apr 17 23:32:59.069173 dracut-cmdline[227]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:32:59.090803 systemd-resolved[235]: Positive Trust Anchors:
Apr 17 23:32:59.090837 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:32:59.090870 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:32:59.093634 systemd-resolved[235]: Defaulting to hostname 'linux'.
Apr 17 23:32:59.094592 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:32:59.096983 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:32:59.171442 kernel: SCSI subsystem initialized
Apr 17 23:32:59.181389 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:32:59.193964 kernel: iscsi: registered transport (tcp)
Apr 17 23:32:59.214008 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:32:59.214079 kernel: QLogic iSCSI HBA Driver
Apr 17 23:32:59.253210 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:32:59.264478 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:32:59.287426 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:32:59.287490 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:32:59.289480 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:32:59.335450 kernel: raid6: avx512x4 gen() 34766 MB/s
Apr 17 23:32:59.353435 kernel: raid6: avx512x2 gen() 33833 MB/s
Apr 17 23:32:59.370464 kernel: raid6: avx512x1 gen() 34283 MB/s
Apr 17 23:32:59.387484 kernel: raid6: avx2x4 gen() 36004 MB/s
Apr 17 23:32:59.404456 kernel: raid6: avx2x2 gen() 35515 MB/s
Apr 17 23:32:59.422158 kernel: raid6: avx2x1 gen() 26695 MB/s
Apr 17 23:32:59.422256 kernel: raid6: using algorithm avx2x4 gen() 36004 MB/s
Apr 17 23:32:59.440248 kernel: raid6: .... xor() 10243 MB/s, rmw enabled
Apr 17 23:32:59.440411 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 23:32:59.458466 kernel: xor: automatically using best checksumming function avx
Apr 17 23:32:59.597479 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:32:59.611413 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:32:59.631950 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:32:59.641688 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 17 23:32:59.644338 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:32:59.658615 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:32:59.670097 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Apr 17 23:32:59.694242 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:32:59.703538 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:32:59.737124 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:32:59.744516 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:32:59.755900 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:32:59.761740 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:32:59.767601 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:32:59.777548 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 17 23:32:59.777673 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 23:32:59.769722 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:32:59.787240 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 17 23:32:59.785246 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:32:59.792775 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:32:59.794534 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:32:59.814111 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:32:59.814132 kernel: GPT:9289727 != 19775487
Apr 17 23:32:59.814139 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:32:59.814146 kernel: GPT:9289727 != 19775487
Apr 17 23:32:59.814153 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:32:59.814160 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:32:59.814166 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 17 23:32:59.814173 kernel: libata version 3.00 loaded.
Apr 17 23:32:59.814181 kernel: AES CTR mode by8 optimization enabled
Apr 17 23:32:59.797431 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:32:59.806078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:32:59.806445 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:32:59.810188 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:32:59.823752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:32:59.831172 kernel: ahci 0000:00:1f.2: version 3.0
Apr 17 23:32:59.831394 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 17 23:32:59.831409 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 17 23:32:59.831628 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 17 23:32:59.825588 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:32:59.840946 kernel: scsi host0: ahci
Apr 17 23:32:59.841082 kernel: scsi host1: ahci
Apr 17 23:32:59.841209 kernel: scsi host2: ahci
Apr 17 23:32:59.841336 kernel: scsi host3: ahci
Apr 17 23:32:59.841418 kernel: scsi host4: ahci
Apr 17 23:32:59.841487 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (469)
Apr 17 23:32:59.843395 kernel: scsi host5: ahci
Apr 17 23:32:59.845413 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 17 23:32:59.845734 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 17 23:32:59.845745 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/vda3 scanned by (udev-worker) (463)
Apr 17 23:32:59.848393 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 17 23:32:59.848425 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 17 23:32:59.852441 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 17 23:32:59.852468 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 17 23:32:59.858240 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 17 23:32:59.872024 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 17 23:32:59.954971 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:32:59.959813 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 17 23:32:59.963653 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 17 23:32:59.969070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 17 23:32:59.986752 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:32:59.991851 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:32:59.998741 disk-uuid[562]: Primary Header is updated.
Apr 17 23:32:59.998741 disk-uuid[562]: Secondary Entries is updated.
Apr 17 23:32:59.998741 disk-uuid[562]: Secondary Header is updated.
Apr 17 23:33:00.003987 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:33:00.018713 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:33:00.173190 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 17 23:33:00.173265 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 17 23:33:00.173278 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 17 23:33:00.173288 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 17 23:33:00.173298 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 17 23:33:00.175110 kernel: ata3.00: applying bridge limits
Apr 17 23:33:00.175358 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 17 23:33:00.178488 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 17 23:33:00.178525 kernel: ata3.00: configured for UDMA/100
Apr 17 23:33:00.182413 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 17 23:33:00.230634 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 17 23:33:00.230982 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 17 23:33:00.243448 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 17 23:33:01.017994 disk-uuid[564]: The operation has completed successfully.
Apr 17 23:33:01.019819 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 17 23:33:01.042348 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:33:01.042440 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:33:01.075792 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:33:01.079199 sh[597]: Success
Apr 17 23:33:01.092419 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 17 23:33:01.122436 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:33:01.138576 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:33:01.144242 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:33:01.155418 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0
Apr 17 23:33:01.155497 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:33:01.155506 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:33:01.156998 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:33:01.158145 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:33:01.165579 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:33:01.168023 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:33:01.180556 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:33:01.184615 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:33:01.193356 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:33:01.193389 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:33:01.193403 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:33:01.198365 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:33:01.206192 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:33:01.209013 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:33:01.213879 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:33:01.219480 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:33:01.276925 ignition[682]: Ignition 2.19.0
Apr 17 23:33:01.276946 ignition[682]: Stage: fetch-offline
Apr 17 23:33:01.276982 ignition[682]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:01.276991 ignition[682]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:33:01.277091 ignition[682]: parsed url from cmdline: ""
Apr 17 23:33:01.277094 ignition[682]: no config URL provided
Apr 17 23:33:01.277098 ignition[682]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:33:01.277102 ignition[682]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:33:01.277122 ignition[682]: op(1): [started] loading QEMU firmware config module
Apr 17 23:33:01.277126 ignition[682]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 17 23:33:01.285490 ignition[682]: op(1): [finished] loading QEMU firmware config module
Apr 17 23:33:01.285504 ignition[682]: QEMU firmware config was not found. Ignoring...
Apr 17 23:33:01.298933 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:33:01.314508 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:33:01.334269 systemd-networkd[786]: lo: Link UP
Apr 17 23:33:01.334292 systemd-networkd[786]: lo: Gained carrier
Apr 17 23:33:01.335153 systemd-networkd[786]: Enumeration completed
Apr 17 23:33:01.335632 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:33:01.335634 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:33:01.337096 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:33:01.338008 systemd-networkd[786]: eth0: Link UP
Apr 17 23:33:01.338011 systemd-networkd[786]: eth0: Gained carrier
Apr 17 23:33:01.338018 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:33:01.339125 systemd[1]: Reached target network.target - Network.
Apr 17 23:33:01.361399 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 23:33:01.411411 ignition[682]: parsing config with SHA512: 35af654d78fc06d33382b896cc8add871fb39c032f023d78d83bc7f60d0f1a26a240467180b7799ac5f91a460170c6e3eb12ed99bd85bbf3744dcbbc10f75a03
Apr 17 23:33:01.416126 unknown[682]: fetched base config from "system"
Apr 17 23:33:01.416144 unknown[682]: fetched user config from "qemu"
Apr 17 23:33:01.416630 ignition[682]: fetch-offline: fetch-offline passed
Apr 17 23:33:01.416701 ignition[682]: Ignition finished successfully
Apr 17 23:33:01.422279 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:33:01.426047 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 17 23:33:01.445635 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:33:01.459544 ignition[790]: Ignition 2.19.0
Apr 17 23:33:01.459560 ignition[790]: Stage: kargs
Apr 17 23:33:01.459732 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:01.459739 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:33:01.460409 ignition[790]: kargs: kargs passed
Apr 17 23:33:01.460441 ignition[790]: Ignition finished successfully
Apr 17 23:33:01.470140 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:33:01.483622 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:33:01.497623 ignition[798]: Ignition 2.19.0
Apr 17 23:33:01.497647 ignition[798]: Stage: disks
Apr 17 23:33:01.497795 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:01.497806 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:33:01.498515 ignition[798]: disks: disks passed
Apr 17 23:33:01.498551 ignition[798]: Ignition finished successfully
Apr 17 23:33:01.505457 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:33:01.509690 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:33:01.513882 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:33:01.514800 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:33:01.519600 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:33:01.522886 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:33:01.539616 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:33:01.554178 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 17 23:33:01.558156 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:33:01.564642 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:33:01.654415 kernel: EXT4-fs (vda9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:33:01.655782 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:33:01.659614 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:33:01.682693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:33:01.686246 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:33:01.690424 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816)
Apr 17 23:33:01.687116 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:33:01.695112 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:33:01.695126 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:33:01.695134 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:33:01.687159 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:33:01.703065 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:33:01.687183 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:33:01.701021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:33:01.736168 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:33:01.737798 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:33:01.771934 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:33:01.776651 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:33:01.781300 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:33:01.785484 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:33:01.863206 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:33:01.875446 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:33:01.879810 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:33:01.885350 kernel: BTRFS info (device vda6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:33:01.903270 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:33:01.906537 ignition[929]: INFO : Ignition 2.19.0
Apr 17 23:33:01.906537 ignition[929]: INFO : Stage: mount
Apr 17 23:33:01.910429 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:01.910429 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:33:01.910429 ignition[929]: INFO : mount: mount passed
Apr 17 23:33:01.910429 ignition[929]: INFO : Ignition finished successfully
Apr 17 23:33:01.908497 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:33:01.916424 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:33:02.153661 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:33:02.166725 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:33:02.176642 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943)
Apr 17 23:33:02.176685 kernel: BTRFS info (device vda6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:33:02.176694 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:33:02.177969 kernel: BTRFS info (device vda6): using free space tree
Apr 17 23:33:02.182343 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 17 23:33:02.183371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:33:02.199336 ignition[960]: INFO : Ignition 2.19.0
Apr 17 23:33:02.199336 ignition[960]: INFO : Stage: files
Apr 17 23:33:02.201978 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:33:02.201978 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 17 23:33:02.201978 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:33:02.201978 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 17 23:33:02.201978 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:33:02.211237 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:33:02.211237 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 17 23:33:02.211237 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:33:02.211237 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:33:02.211237 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:33:02.203201 unknown[960]: wrote ssh authorized keys file for user: core
Apr 17 23:33:02.300554 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:33:02.437532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:33:02.437532 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:33:02.443265 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 17 23:33:02.561245 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:33:02.896553 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:33:02.896553 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 17 23:33:02.903732 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:33:02.903732 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:33:02.903732 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:33:02.903732 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Apr 17 23:33:02.903732 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:33:02.903732 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 17 23:33:02.903732 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Apr 17 23:33:02.903732 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:33:02.928796 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:33:02.931195 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 17 23:33:02.931195 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 17 23:33:02.931195 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Apr 17 23:33:02.931195 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:33:02.931195 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:33:02.931195 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:33:02.931195 ignition[960]: INFO : files: files passed
Apr 17 23:33:02.931195 ignition[960]: INFO : Ignition finished successfully
Apr 17 23:33:02.932225 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:33:02.949640 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:33:02.954578 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:33:02.965643 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 17 23:33:02.957034 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:33:02.973386 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:33:02.973386 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:33:02.957118 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:33:02.980009 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:33:02.967679 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:33:02.970576 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:33:02.983535 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 17 23:33:03.005124 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 17 23:33:03.005263 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 17 23:33:03.012989 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 17 23:33:03.016347 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 17 23:33:03.017101 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 17 23:33:03.021700 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 17 23:33:03.041164 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:33:03.051542 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 17 23:33:03.060557 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 17 23:33:03.062660 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:33:03.066361 systemd[1]: Stopped target timers.target - Timer Units. Apr 17 23:33:03.069408 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 17 23:33:03.069592 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 17 23:33:03.072746 systemd-networkd[786]: eth0: Gained IPv6LL Apr 17 23:33:03.073222 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 17 23:33:03.075061 systemd[1]: Stopped target basic.target - Basic System. Apr 17 23:33:03.080180 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 17 23:33:03.080963 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 17 23:33:03.084728 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Apr 17 23:33:03.087819 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 17 23:33:03.090962 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 17 23:33:03.093949 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 17 23:33:03.097017 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 17 23:33:03.100163 systemd[1]: Stopped target swap.target - Swaps. Apr 17 23:33:03.102964 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 17 23:33:03.103124 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 17 23:33:03.107575 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:33:03.108494 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:33:03.112669 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 17 23:33:03.112788 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:33:03.115748 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 17 23:33:03.115916 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 17 23:33:03.121482 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 17 23:33:03.121617 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 17 23:33:03.125094 systemd[1]: Stopped target paths.target - Path Units. Apr 17 23:33:03.127521 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 17 23:33:03.133580 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:33:03.134412 systemd[1]: Stopped target slices.target - Slice Units. Apr 17 23:33:03.139482 systemd[1]: Stopped target sockets.target - Socket Units. Apr 17 23:33:03.143886 systemd[1]: iscsid.socket: Deactivated successfully. 
Apr 17 23:33:03.143984 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 17 23:33:03.146522 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 17 23:33:03.146579 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 17 23:33:03.147394 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 17 23:33:03.147541 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 17 23:33:03.151126 systemd[1]: ignition-files.service: Deactivated successfully. Apr 17 23:33:03.151245 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 17 23:33:03.166516 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 17 23:33:03.169410 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 17 23:33:03.170010 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 17 23:33:03.170090 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:33:03.172954 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 17 23:33:03.173121 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 17 23:33:03.184019 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 17 23:33:03.184103 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 17 23:33:03.190708 ignition[1014]: INFO : Ignition 2.19.0 Apr 17 23:33:03.190708 ignition[1014]: INFO : Stage: umount Apr 17 23:33:03.193564 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 17 23:33:03.193564 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Apr 17 23:33:03.191820 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Apr 17 23:33:03.200256 ignition[1014]: INFO : umount: umount passed Apr 17 23:33:03.200256 ignition[1014]: INFO : Ignition finished successfully Apr 17 23:33:03.200691 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 17 23:33:03.200796 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 17 23:33:03.204536 systemd[1]: Stopped target network.target - Network. Apr 17 23:33:03.206044 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 17 23:33:03.206134 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 17 23:33:03.211559 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 17 23:33:03.211666 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 17 23:33:03.212294 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 17 23:33:03.212394 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 17 23:33:03.218733 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 17 23:33:03.218872 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 17 23:33:03.219498 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 17 23:33:03.225459 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 17 23:33:03.231862 systemd-networkd[786]: eth0: DHCPv6 lease lost Apr 17 23:33:03.234864 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 17 23:33:03.234958 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 17 23:33:03.238156 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 17 23:33:03.238196 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:33:03.250770 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 17 23:33:03.251808 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Apr 17 23:33:03.251891 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 17 23:33:03.257386 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:33:03.263440 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 17 23:33:03.263592 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 17 23:33:03.268504 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 17 23:33:03.268674 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 17 23:33:03.274904 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 17 23:33:03.276217 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:33:03.280427 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 17 23:33:03.280478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 17 23:33:03.282502 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 17 23:33:03.282542 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:33:03.286147 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 17 23:33:03.286213 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 17 23:33:03.291190 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 17 23:33:03.291294 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 17 23:33:03.295429 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 17 23:33:03.295530 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 17 23:33:03.300211 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 17 23:33:03.300357 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 17 23:33:03.313553 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Apr 17 23:33:03.317010 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 17 23:33:03.317366 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:33:03.318430 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 17 23:33:03.318495 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 17 23:33:03.322828 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 17 23:33:03.322893 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:33:03.326016 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Apr 17 23:33:03.326108 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:33:03.331232 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 17 23:33:03.331276 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:33:03.342415 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 17 23:33:03.342521 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:33:03.349184 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 17 23:33:03.349830 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:33:03.352291 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 17 23:33:03.352415 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 17 23:33:03.355481 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 17 23:33:03.355557 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 17 23:33:03.359237 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 17 23:33:03.376743 systemd[1]: Starting initrd-switch-root.service - Switch Root... 
Apr 17 23:33:03.383975 systemd[1]: Switching root. Apr 17 23:33:03.407223 systemd-journald[193]: Journal stopped Apr 17 23:33:04.139115 systemd-journald[193]: Received SIGTERM from PID 1 (systemd). Apr 17 23:33:04.139166 kernel: SELinux: policy capability network_peer_controls=1 Apr 17 23:33:04.139180 kernel: SELinux: policy capability open_perms=1 Apr 17 23:33:04.139188 kernel: SELinux: policy capability extended_socket_class=1 Apr 17 23:33:04.139194 kernel: SELinux: policy capability always_check_network=0 Apr 17 23:33:04.139204 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 17 23:33:04.139212 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 17 23:33:04.139219 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 17 23:33:04.139226 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 17 23:33:04.139234 kernel: audit: type=1403 audit(1776468783.518:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 17 23:33:04.139243 systemd[1]: Successfully loaded SELinux policy in 33.839ms. Apr 17 23:33:04.139258 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.129ms. Apr 17 23:33:04.139267 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 17 23:33:04.139277 systemd[1]: Detected virtualization kvm. Apr 17 23:33:04.139285 systemd[1]: Detected architecture x86-64. Apr 17 23:33:04.139294 systemd[1]: Detected first boot. Apr 17 23:33:04.139302 systemd[1]: Initializing machine ID from VM UUID. Apr 17 23:33:04.139715 zram_generator::config[1059]: No configuration found. Apr 17 23:33:04.139727 systemd[1]: Populated /etc with preset unit settings. 
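[Annotation] "Initializing machine ID from VM UUID" means systemd seeds /etc/machine-id from the hypervisor-provided DMI product UUID rather than generating a random one. A sketch of the transformation under that assumption, with a made-up UUID (the real source is /sys/class/dmi/id/product_uuid):

```shell
# The machine ID is the UUID in "simple" form: dashes stripped, lowercase,
# 32 hex digits. The UUID below is a made-up example.
uuid="1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d"   # real source: /sys/class/dmi/id/product_uuid
machine_id=$(printf '%s' "$uuid" | tr -d '-' | tr 'A-F' 'a-f')
printf '%s\n' "$machine_id"
```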
Apr 17 23:33:04.139736 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 17 23:33:04.139782 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 17 23:33:04.139791 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 17 23:33:04.139802 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 17 23:33:04.139810 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 17 23:33:04.139818 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 17 23:33:04.139826 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 17 23:33:04.139833 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 17 23:33:04.139860 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 17 23:33:04.139869 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 17 23:33:04.139878 systemd[1]: Created slice user.slice - User and Session Slice. Apr 17 23:33:04.139888 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 17 23:33:04.139896 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 17 23:33:04.139904 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 17 23:33:04.139912 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 17 23:33:04.139920 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 17 23:33:04.139928 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 17 23:33:04.139935 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Apr 17 23:33:04.139943 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 17 23:33:04.139951 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 17 23:33:04.139961 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Apr 17 23:33:04.139969 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 17 23:33:04.139977 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 17 23:33:04.139985 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 17 23:33:04.139996 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 17 23:33:04.140004 systemd[1]: Reached target slices.target - Slice Units. Apr 17 23:33:04.140012 systemd[1]: Reached target swap.target - Swaps. Apr 17 23:33:04.140019 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 17 23:33:04.140031 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 17 23:33:04.140039 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 17 23:33:04.140047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 17 23:33:04.140055 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 17 23:33:04.140063 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 17 23:33:04.140071 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 17 23:33:04.140079 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 17 23:33:04.140087 systemd[1]: Mounting media.mount - External Media Directory... Apr 17 23:33:04.140094 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:33:04.140104 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Apr 17 23:33:04.140112 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 17 23:33:04.140120 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 17 23:33:04.140127 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 17 23:33:04.140135 systemd[1]: Reached target machines.target - Containers. Apr 17 23:33:04.140143 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 17 23:33:04.140153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:33:04.140160 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 17 23:33:04.140169 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 17 23:33:04.140176 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:33:04.140184 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:33:04.140192 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:33:04.140199 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 17 23:33:04.140207 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:33:04.140215 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 17 23:33:04.140224 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Apr 17 23:33:04.140232 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 17 23:33:04.140241 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 17 23:33:04.140249 systemd[1]: Stopped systemd-fsck-usr.service. 
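[Annotation] The modprobe@configfs, modprobe@dm_mod, modprobe@drm, etc. jobs above are all instances of one template unit: the text after "@" becomes the "%i" instance specifier and is handed to modprobe. A sketch of that expansion; the ExecStart line mirrors the idea of systemd's modprobe@.service, though the upstream unit may differ in detail:

```shell
# Simulate template instantiation: modprobe@dm_mod.service -> %i = dm_mod.
template='ExecStart=-/sbin/modprobe -abq %i'
instance="dm_mod"
expanded=$(printf '%s' "$template" | sed "s/%i/$instance/")
printf '%s\n' "$expanded"
```

One template file thus serves every module the initrd needs, which is why the log shows a burst of near-identical Load Kernel Module jobs.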
Apr 17 23:33:04.140257 kernel: loop: module loaded Apr 17 23:33:04.140264 kernel: fuse: init (API version 7.39) Apr 17 23:33:04.140272 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 17 23:33:04.140279 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 17 23:33:04.140287 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 17 23:33:04.140295 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 17 23:33:04.140345 systemd-journald[1136]: Collecting audit messages is disabled. Apr 17 23:33:04.140365 systemd-journald[1136]: Journal started Apr 17 23:33:04.140382 systemd-journald[1136]: Runtime Journal (/run/log/journal/bb9d80be56584601b1df2ef01e539c1d) is 6.0M, max 48.4M, 42.3M free. Apr 17 23:33:03.847940 systemd[1]: Queued start job for default target multi-user.target. Apr 17 23:33:03.872646 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 17 23:33:03.873045 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 17 23:33:04.144490 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 17 23:33:04.148404 systemd[1]: verity-setup.service: Deactivated successfully. Apr 17 23:33:04.148461 systemd[1]: Stopped verity-setup.service. Apr 17 23:33:04.154452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:33:04.156344 kernel: ACPI: bus type drm_connector registered Apr 17 23:33:04.156364 systemd[1]: Started systemd-journald.service - Journal Service. Apr 17 23:33:04.159578 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 17 23:33:04.161448 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 17 23:33:04.163263 systemd[1]: Mounted media.mount - External Media Directory. 
Apr 17 23:33:04.164839 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 17 23:33:04.166593 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 17 23:33:04.168396 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 17 23:33:04.170267 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 17 23:33:04.172449 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 17 23:33:04.174717 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 17 23:33:04.174923 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 17 23:33:04.176794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:33:04.176937 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:33:04.178888 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:33:04.179011 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:33:04.180824 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:33:04.180977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:33:04.182996 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 17 23:33:04.183148 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 17 23:33:04.184981 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:33:04.185109 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:33:04.186954 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 17 23:33:04.188773 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 17 23:33:04.190827 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Apr 17 23:33:04.193928 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 17 23:33:04.203349 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 17 23:33:04.212753 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 17 23:33:04.215750 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 17 23:33:04.217541 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 17 23:33:04.217594 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 17 23:33:04.220870 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 17 23:33:04.223875 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 17 23:33:04.226435 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 17 23:33:04.228028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:33:04.230228 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 17 23:33:04.232862 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 17 23:33:04.234662 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:33:04.238084 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 17 23:33:04.239875 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 17 23:33:04.241079 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 17 23:33:04.243638 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
Apr 17 23:33:04.249518 systemd-journald[1136]: Time spent on flushing to /var/log/journal/bb9d80be56584601b1df2ef01e539c1d is 14.514ms for 952 entries. Apr 17 23:33:04.249518 systemd-journald[1136]: System Journal (/var/log/journal/bb9d80be56584601b1df2ef01e539c1d) is 8.0M, max 195.6M, 187.6M free. Apr 17 23:33:04.282536 systemd-journald[1136]: Received client request to flush runtime journal. Apr 17 23:33:04.247173 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 17 23:33:04.251866 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 17 23:33:04.254660 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 17 23:33:04.257659 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 17 23:33:04.259706 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 17 23:33:04.280748 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Apr 17 23:33:04.285428 kernel: loop0: detected capacity change from 0 to 142488 Apr 17 23:33:04.286380 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 17 23:33:04.289298 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 17 23:33:04.295108 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Apr 17 23:33:04.295120 systemd-tmpfiles[1175]: ACLs are not supported, ignoring. Apr 17 23:33:04.300355 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 17 23:33:04.303143 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 17 23:33:04.306116 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 17 23:33:04.311482 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:33:04.318460 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 17 23:33:04.322505 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 17 23:33:04.325189 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 17 23:33:04.325957 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 17 23:33:04.343451 kernel: loop1: detected capacity change from 0 to 228704 Apr 17 23:33:04.344427 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 17 23:33:04.357000 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 17 23:33:04.377610 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Apr 17 23:33:04.377650 systemd-tmpfiles[1195]: ACLs are not supported, ignoring. Apr 17 23:33:04.381768 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 17 23:33:04.388358 kernel: loop2: detected capacity change from 0 to 140768 Apr 17 23:33:04.424443 kernel: loop3: detected capacity change from 0 to 142488 Apr 17 23:33:04.437367 kernel: loop4: detected capacity change from 0 to 228704 Apr 17 23:33:04.447357 kernel: loop5: detected capacity change from 0 to 140768 Apr 17 23:33:04.456208 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Apr 17 23:33:04.456571 (sd-merge)[1200]: Merged extensions into '/usr'. Apr 17 23:33:04.459526 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)... Apr 17 23:33:04.459541 systemd[1]: Reloading... Apr 17 23:33:04.503345 zram_generator::config[1224]: No configuration found. Apr 17 23:33:04.574800 ldconfig[1169]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
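[Annotation] The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr (the loopN capacity changes are those images being attached). A valid sysext tree must carry an extension-release file matching its name, with compatibility fields checked against the host's os-release. A sketch of the expected layout for a hypothetical extension named "example"; the release-file fields shown are illustrative:

```shell
# Minimal on-disk layout for a systemd-sysext extension named "example".
# systemd-sysext rejects images whose extension-release file is missing or
# whose compatibility fields don't match the host (see os-release matching).
ext=$(mktemp -d)/example
mkdir -p "$ext/usr/lib/extension-release.d" "$ext/usr/bin"
cat > "$ext/usr/lib/extension-release.d/extension-release.example" <<'EOF'
ID=flatcar
SYSEXT_LEVEL=1.0
EOF
: > "$ext/usr/bin/example-tool"
# On a real host: place this tree (or a raw image) under /var/lib/extensions
# and run `systemd-sysext merge` to overlay its /usr onto the system's /usr.
```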
Apr 17 23:33:04.587949 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:33:04.619815 systemd[1]: Reloading finished in 159 ms. Apr 17 23:33:04.658300 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:33:04.660654 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 17 23:33:04.662685 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 17 23:33:04.679553 systemd[1]: Starting ensure-sysext.service... Apr 17 23:33:04.682159 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 17 23:33:04.684874 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 17 23:33:04.690068 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Apr 17 23:33:04.690093 systemd[1]: Reloading... Apr 17 23:33:04.696890 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 17 23:33:04.697110 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 17 23:33:04.697703 systemd-tmpfiles[1266]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 17 23:33:04.697898 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Apr 17 23:33:04.697949 systemd-tmpfiles[1266]: ACLs are not supported, ignoring. Apr 17 23:33:04.699717 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. Apr 17 23:33:04.699724 systemd-tmpfiles[1266]: Skipping /boot Apr 17 23:33:04.705609 systemd-tmpfiles[1266]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 17 23:33:04.705634 systemd-tmpfiles[1266]: Skipping /boot Apr 17 23:33:04.706296 systemd-udevd[1267]: Using default interface naming scheme 'v255'. Apr 17 23:33:04.725695 zram_generator::config[1292]: No configuration found. Apr 17 23:33:04.760433 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1301) Apr 17 23:33:04.791362 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 17 23:33:04.797336 kernel: ACPI: button: Power Button [PWRF] Apr 17 23:33:04.817488 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 17 23:33:04.817684 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 17 23:33:04.817697 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 17 23:33:04.817790 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 17 23:33:04.829565 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:33:04.868073 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 23:33:04.868129 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 17 23:33:04.872797 systemd[1]: Reloading finished in 182 ms. Apr 17 23:33:04.896352 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:33:04.905897 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:33:04.923284 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 17 23:33:04.951391 systemd[1]: Finished ensure-sysext.service. Apr 17 23:33:04.957741 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
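[Annotation] The "Duplicate line for path ..., ignoring" warnings above mean two tmpfiles.d entries claimed the same path; systemd-tmpfiles keeps the first entry it parses and ignores later ones. A sketch of that first-wins de-duplication over hypothetical tmpfiles.d-style lines (fields: type, path, mode, user, group, age, argument):

```shell
# Two lines claim /var/log/journal; like systemd-tmpfiles, keep only the
# first entry per path (field 2). The config lines are hypothetical.
conf=$(mktemp)
cat > "$conf" <<'EOF'
d /var/log/journal 2755 root systemd-journal - -
Z /var/log/journal - root systemd-journal - -
d /run/example 0755 root root - -
EOF
kept=$(awk '!seen[$2]++' "$conf")
printf '%s\n' "$kept"
```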
Apr 17 23:33:04.967615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:33:04.985499 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:33:04.988695 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 17 23:33:04.990711 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:33:04.991706 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 17 23:33:04.994396 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:33:04.999459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:33:05.002563 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:33:05.005931 lvm[1370]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:33:05.008106 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:33:05.010210 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:33:05.012149 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 17 23:33:05.016683 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 17 23:33:05.019960 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:33:05.023195 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:33:05.025560 augenrules[1387]: No rules
Apr 17 23:33:05.028041 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Apr 17 23:33:05.031794 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 17 23:33:05.034405 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:33:05.036348 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:33:05.037007 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:33:05.040007 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 17 23:33:05.042589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:33:05.042749 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:33:05.044941 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:33:05.045053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:33:05.047163 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 17 23:33:05.049397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:33:05.049506 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:33:05.051788 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:33:05.051925 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:33:05.054234 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 17 23:33:05.057114 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 17 23:33:05.066632 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 17 23:33:05.070221 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:33:05.083626 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 17 23:33:05.088579 lvm[1411]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 17 23:33:05.174206 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:33:05.174488 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:33:05.183523 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 17 23:33:05.186402 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 17 23:33:05.187908 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 17 23:33:05.188228 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:33:05.190173 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 17 23:33:05.198114 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 17 23:33:05.213598 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 17 23:33:05.263070 systemd-networkd[1388]: lo: Link UP
Apr 17 23:33:05.263089 systemd-networkd[1388]: lo: Gained carrier
Apr 17 23:33:05.263940 systemd-networkd[1388]: Enumeration completed
Apr 17 23:33:05.264045 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:33:05.264402 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:33:05.264420 systemd-networkd[1388]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:33:05.265078 systemd-networkd[1388]: eth0: Link UP
Apr 17 23:33:05.265093 systemd-networkd[1388]: eth0: Gained carrier
Apr 17 23:33:05.265102 systemd-networkd[1388]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:33:05.270935 systemd-resolved[1391]: Positive Trust Anchors:
Apr 17 23:33:05.270956 systemd-resolved[1391]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:33:05.270981 systemd-resolved[1391]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:33:05.273464 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 17 23:33:05.275715 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Apr 17 23:33:05.277740 systemd[1]: Reached target time-set.target - System Time Set.
Apr 17 23:33:05.279356 systemd-resolved[1391]: Defaulting to hostname 'linux'.
Apr 17 23:33:05.280411 systemd-networkd[1388]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 17 23:33:05.280898 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:33:05.280938 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection.
Apr 17 23:33:05.849383 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Apr 17 23:33:05.849412 systemd-timesyncd[1395]: Initial clock synchronization to Fri 2026-04-17 23:33:05.849321 UTC.
Apr 17 23:33:05.850837 systemd[1]: Reached target network.target - Network.
Apr 17 23:33:05.852173 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:33:05.853877 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:33:05.855472 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 17 23:33:05.857264 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 17 23:33:05.857305 systemd-resolved[1391]: Clock change detected. Flushing caches.
Apr 17 23:33:05.859192 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 17 23:33:05.860817 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 17 23:33:05.862656 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 17 23:33:05.864749 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 17 23:33:05.864793 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:33:05.866125 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:33:05.867972 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 17 23:33:05.870778 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 17 23:33:05.878612 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 17 23:33:05.880596 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 17 23:33:05.882581 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:33:05.884141 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:33:05.885565 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:33:05.885595 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 17 23:33:05.886753 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 17 23:33:05.889283 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 17 23:33:05.891584 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 17 23:33:05.893792 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 17 23:33:05.895284 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 17 23:33:05.897404 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 17 23:33:05.898883 jq[1431]: false
Apr 17 23:33:05.900199 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 17 23:33:05.902714 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 17 23:33:05.905825 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 17 23:33:05.912155 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found loop3
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found loop4
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found loop5
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found sr0
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found vda
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found vda1
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found vda2
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found vda3
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found usr
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found vda4
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found vda6
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found vda7
Apr 17 23:33:05.915073 extend-filesystems[1432]: Found vda9
Apr 17 23:33:05.915073 extend-filesystems[1432]: Checking size of /dev/vda9
Apr 17 23:33:05.914623 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 17 23:33:05.944759 extend-filesystems[1432]: Resized partition /dev/vda9
Apr 17 23:33:05.914890 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 17 23:33:05.946516 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024)
Apr 17 23:33:05.957100 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Apr 17 23:33:05.918843 systemd[1]: Starting update-engine.service - Update Engine...
Apr 17 23:33:05.957161 jq[1450]: true
Apr 17 23:33:05.947884 dbus-daemon[1430]: [system] SELinux support is enabled
Apr 17 23:33:05.924163 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 17 23:33:05.957462 update_engine[1445]: I20260417 23:33:05.950986 1445 main.cc:92] Flatcar Update Engine starting
Apr 17 23:33:05.957462 update_engine[1445]: I20260417 23:33:05.954185 1445 update_check_scheduler.cc:74] Next update check in 11m36s
Apr 17 23:33:05.928248 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 17 23:33:05.957685 jq[1456]: true
Apr 17 23:33:05.928407 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 17 23:33:05.933483 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 17 23:33:05.933630 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 17 23:33:05.941594 systemd[1]: motdgen.service: Deactivated successfully.
Apr 17 23:33:05.941730 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 17 23:33:05.954292 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 17 23:33:05.960050 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (1310)
Apr 17 23:33:05.976440 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 17 23:33:05.982780 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Apr 17 23:33:05.995677 tar[1454]: linux-amd64/LICENSE
Apr 17 23:33:05.990860 systemd[1]: Started update-engine.service - Update Engine.
Apr 17 23:33:05.996179 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 17 23:33:05.996213 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 17 23:33:05.998376 tar[1454]: linux-amd64/helm
Apr 17 23:33:05.998681 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Apr 17 23:33:05.998681 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1
Apr 17 23:33:05.998681 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Apr 17 23:33:06.008094 extend-filesystems[1432]: Resized filesystem in /dev/vda9
Apr 17 23:33:06.011107 bash[1483]: Updated "/home/core/.ssh/authorized_keys"
Apr 17 23:33:05.999126 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 17 23:33:05.999141 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 17 23:33:06.009186 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 17 23:33:06.011399 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 17 23:33:06.012057 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 17 23:33:06.015473 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 17 23:33:06.019614 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Apr 17 23:33:06.021493 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 17 23:33:06.021508 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 17 23:33:06.026042 sshd_keygen[1448]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 17 23:33:06.025696 systemd-logind[1440]: New seat seat0.
Apr 17 23:33:06.034859 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 17 23:33:06.048703 locksmithd[1485]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 17 23:33:06.057512 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 17 23:33:06.067267 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 17 23:33:06.072815 systemd[1]: issuegen.service: Deactivated successfully.
Apr 17 23:33:06.072963 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 17 23:33:06.080969 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 17 23:33:06.089804 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Apr 17 23:33:06.101265 systemd[1]: Started getty@tty1.service - Getty on tty1.
Apr 17 23:33:06.104057 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 17 23:33:06.105863 systemd[1]: Reached target getty.target - Login Prompts.
Apr 17 23:33:06.145814 containerd[1462]: time="2026-04-17T23:33:06.145731339Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 17 23:33:06.165057 containerd[1462]: time="2026-04-17T23:33:06.164910867Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167120 containerd[1462]: time="2026-04-17T23:33:06.167059547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167120 containerd[1462]: time="2026-04-17T23:33:06.167111634Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:33:06.167120 containerd[1462]: time="2026-04-17T23:33:06.167128220Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:33:06.167294 containerd[1462]: time="2026-04-17T23:33:06.167256325Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:33:06.167294 containerd[1462]: time="2026-04-17T23:33:06.167287887Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167362 containerd[1462]: time="2026-04-17T23:33:06.167329899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167362 containerd[1462]: time="2026-04-17T23:33:06.167355282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167514 containerd[1462]: time="2026-04-17T23:33:06.167485868Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167514 containerd[1462]: time="2026-04-17T23:33:06.167508809Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167571 containerd[1462]: time="2026-04-17T23:33:06.167518936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167571 containerd[1462]: time="2026-04-17T23:33:06.167525812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167622 containerd[1462]: time="2026-04-17T23:33:06.167599485Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167785 containerd[1462]: time="2026-04-17T23:33:06.167755554Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167900 containerd[1462]: time="2026-04-17T23:33:06.167866532Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:33:06.167900 containerd[1462]: time="2026-04-17T23:33:06.167889414Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:33:06.168033 containerd[1462]: time="2026-04-17T23:33:06.167971942Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:33:06.168139 containerd[1462]: time="2026-04-17T23:33:06.168115913Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:33:06.174387 containerd[1462]: time="2026-04-17T23:33:06.174346502Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:33:06.174451 containerd[1462]: time="2026-04-17T23:33:06.174399241Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:33:06.174451 containerd[1462]: time="2026-04-17T23:33:06.174412316Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:33:06.174451 containerd[1462]: time="2026-04-17T23:33:06.174427485Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:33:06.174451 containerd[1462]: time="2026-04-17T23:33:06.174438484Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:33:06.174605 containerd[1462]: time="2026-04-17T23:33:06.174574096Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:33:06.174813 containerd[1462]: time="2026-04-17T23:33:06.174784879Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:33:06.174921 containerd[1462]: time="2026-04-17T23:33:06.174871797Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:33:06.174921 containerd[1462]: time="2026-04-17T23:33:06.174885121Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:33:06.174921 containerd[1462]: time="2026-04-17T23:33:06.174895521Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:33:06.174921 containerd[1462]: time="2026-04-17T23:33:06.174905809Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:33:06.174921 containerd[1462]: time="2026-04-17T23:33:06.174919607Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:33:06.174981 containerd[1462]: time="2026-04-17T23:33:06.174929955Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:33:06.174981 containerd[1462]: time="2026-04-17T23:33:06.174940613Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:33:06.174981 containerd[1462]: time="2026-04-17T23:33:06.174950437Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:33:06.174981 containerd[1462]: time="2026-04-17T23:33:06.174959649Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:33:06.174981 containerd[1462]: time="2026-04-17T23:33:06.174968053Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:33:06.174981 containerd[1462]: time="2026-04-17T23:33:06.174976961Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:33:06.175083 containerd[1462]: time="2026-04-17T23:33:06.174990502Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175083 containerd[1462]: time="2026-04-17T23:33:06.175036480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175083 containerd[1462]: time="2026-04-17T23:33:06.175050357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175083 containerd[1462]: time="2026-04-17T23:33:06.175059928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175083 containerd[1462]: time="2026-04-17T23:33:06.175068644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175083 containerd[1462]: time="2026-04-17T23:33:06.175077374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175162 containerd[1462]: time="2026-04-17T23:33:06.175085944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175162 containerd[1462]: time="2026-04-17T23:33:06.175098537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175162 containerd[1462]: time="2026-04-17T23:33:06.175107469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175162 containerd[1462]: time="2026-04-17T23:33:06.175118720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175162 containerd[1462]: time="2026-04-17T23:33:06.175127084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175162 containerd[1462]: time="2026-04-17T23:33:06.175134895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175162 containerd[1462]: time="2026-04-17T23:33:06.175144636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175162 containerd[1462]: time="2026-04-17T23:33:06.175158870Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:33:06.175259 containerd[1462]: time="2026-04-17T23:33:06.175174488Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175259 containerd[1462]: time="2026-04-17T23:33:06.175183187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175259 containerd[1462]: time="2026-04-17T23:33:06.175190313Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:33:06.175259 containerd[1462]: time="2026-04-17T23:33:06.175234414Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:33:06.175259 containerd[1462]: time="2026-04-17T23:33:06.175246840Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:33:06.175259 containerd[1462]: time="2026-04-17T23:33:06.175254121Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:33:06.175333 containerd[1462]: time="2026-04-17T23:33:06.175262864Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:33:06.175333 containerd[1462]: time="2026-04-17T23:33:06.175270359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175333 containerd[1462]: time="2026-04-17T23:33:06.175283596Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:33:06.175333 containerd[1462]: time="2026-04-17T23:33:06.175290979Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:33:06.175333 containerd[1462]: time="2026-04-17T23:33:06.175298012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:33:06.175966 containerd[1462]: time="2026-04-17T23:33:06.175509180Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:33:06.175966 containerd[1462]: time="2026-04-17T23:33:06.175573423Z" level=info msg="Connect containerd service"
Apr 17 23:33:06.175966 containerd[1462]: time="2026-04-17T23:33:06.175605143Z" level=info msg="using legacy CRI server"
Apr 17 23:33:06.175966 containerd[1462]: time="2026-04-17T23:33:06.175610636Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:33:06.175966 containerd[1462]: time="2026-04-17T23:33:06.175674697Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:33:06.176227 containerd[1462]: time="2026-04-17T23:33:06.176131489Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:33:06.176430 containerd[1462]: time="2026-04-17T23:33:06.176294898Z" level=info msg="Start subscribing containerd event"
Apr 17 23:33:06.176430 containerd[1462]: time="2026-04-17T23:33:06.176376574Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:33:06.176469 containerd[1462]: time="2026-04-17T23:33:06.176439840Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 23:33:06.176666 containerd[1462]: time="2026-04-17T23:33:06.176380089Z" level=info msg="Start recovering state"
Apr 17 23:33:06.177653 containerd[1462]: time="2026-04-17T23:33:06.176737087Z" level=info msg="Start event monitor"
Apr 17 23:33:06.177653 containerd[1462]: time="2026-04-17T23:33:06.176754977Z" level=info msg="Start snapshots syncer"
Apr 17 23:33:06.177653 containerd[1462]: time="2026-04-17T23:33:06.176778968Z" level=info msg="Start cni network conf syncer for default"
Apr 17 23:33:06.177653 containerd[1462]: time="2026-04-17T23:33:06.176788554Z" level=info msg="Start streaming server"
Apr 17 23:33:06.176924 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:33:06.179074 containerd[1462]: time="2026-04-17T23:33:06.179056880Z" level=info msg="containerd successfully booted in 0.035019s"
Apr 17 23:33:06.388918 tar[1454]: linux-amd64/README.md
Apr 17 23:33:06.401173 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:33:07.352584 systemd-networkd[1388]: eth0: Gained IPv6LL
Apr 17 23:33:07.355311 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 17 23:33:07.357694 systemd[1]: Reached target network-online.target - Network is Online.
Apr 17 23:33:07.374266 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Apr 17 23:33:07.377361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:33:07.380191 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 17 23:33:07.394147 systemd[1]: coreos-metadata.service: Deactivated successfully.
Apr 17 23:33:07.394297 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Apr 17 23:33:07.396493 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 17 23:33:07.401330 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 17 23:33:08.045184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:33:08.047270 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 17 23:33:08.049142 (kubelet)[1542]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:33:08.050872 systemd[1]: Startup finished in 1.011s (kernel) + 4.842s (initrd) + 3.995s (userspace) = 9.849s. Apr 17 23:33:08.469641 kubelet[1542]: E0417 23:33:08.469441 1542 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:33:08.471850 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:33:08.471976 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:33:12.090231 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 17 23:33:12.091270 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:36952.service - OpenSSH per-connection server daemon (10.0.0.1:36952). Apr 17 23:33:12.153974 sshd[1556]: Accepted publickey for core from 10.0.0.1 port 36952 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:33:12.157516 sshd[1556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:12.164838 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 17 23:33:12.174483 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 17 23:33:12.176149 systemd-logind[1440]: New session 1 of user core. 
Apr 17 23:33:12.184538 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 17 23:33:12.186663 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 17 23:33:12.193431 (systemd)[1560]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 17 23:33:12.266308 systemd[1560]: Queued start job for default target default.target. Apr 17 23:33:12.276370 systemd[1560]: Created slice app.slice - User Application Slice. Apr 17 23:33:12.276424 systemd[1560]: Reached target paths.target - Paths. Apr 17 23:33:12.276437 systemd[1560]: Reached target timers.target - Timers. Apr 17 23:33:12.277697 systemd[1560]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 17 23:33:12.286967 systemd[1560]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 17 23:33:12.287138 systemd[1560]: Reached target sockets.target - Sockets. Apr 17 23:33:12.287172 systemd[1560]: Reached target basic.target - Basic System. Apr 17 23:33:12.287208 systemd[1560]: Reached target default.target - Main User Target. Apr 17 23:33:12.287235 systemd[1560]: Startup finished in 87ms. Apr 17 23:33:12.287451 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 17 23:33:12.288755 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 17 23:33:12.357986 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:36962.service - OpenSSH per-connection server daemon (10.0.0.1:36962). Apr 17 23:33:12.383689 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 36962 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:33:12.384779 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:12.388211 systemd-logind[1440]: New session 2 of user core. Apr 17 23:33:12.399260 systemd[1]: Started session-2.scope - Session 2 of User core. 
Apr 17 23:33:12.452118 sshd[1571]: pam_unix(sshd:session): session closed for user core Apr 17 23:33:12.470279 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:36962.service: Deactivated successfully. Apr 17 23:33:12.471491 systemd[1]: session-2.scope: Deactivated successfully. Apr 17 23:33:12.472500 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit. Apr 17 23:33:12.473387 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:36968.service - OpenSSH per-connection server daemon (10.0.0.1:36968). Apr 17 23:33:12.474095 systemd-logind[1440]: Removed session 2. Apr 17 23:33:12.502980 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 36968 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:33:12.504427 sshd[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:12.508099 systemd-logind[1440]: New session 3 of user core. Apr 17 23:33:12.518275 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 17 23:33:12.569562 sshd[1578]: pam_unix(sshd:session): session closed for user core Apr 17 23:33:12.584325 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:36968.service: Deactivated successfully. Apr 17 23:33:12.585528 systemd[1]: session-3.scope: Deactivated successfully. Apr 17 23:33:12.586544 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit. Apr 17 23:33:12.587754 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:36974.service - OpenSSH per-connection server daemon (10.0.0.1:36974). Apr 17 23:33:12.588359 systemd-logind[1440]: Removed session 3. Apr 17 23:33:12.620087 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 36974 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:33:12.621276 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:12.626492 systemd-logind[1440]: New session 4 of user core. Apr 17 23:33:12.636184 systemd[1]: Started session-4.scope - Session 4 of User core. 
Apr 17 23:33:12.691155 sshd[1585]: pam_unix(sshd:session): session closed for user core Apr 17 23:33:12.703643 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:36974.service: Deactivated successfully. Apr 17 23:33:12.704821 systemd[1]: session-4.scope: Deactivated successfully. Apr 17 23:33:12.706191 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit. Apr 17 23:33:12.707108 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:36986.service - OpenSSH per-connection server daemon (10.0.0.1:36986). Apr 17 23:33:12.708077 systemd-logind[1440]: Removed session 4. Apr 17 23:33:12.737764 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 36986 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:33:12.738893 sshd[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:12.742813 systemd-logind[1440]: New session 5 of user core. Apr 17 23:33:12.758244 systemd[1]: Started session-5.scope - Session 5 of User core. Apr 17 23:33:12.820689 sudo[1595]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 17 23:33:12.820959 sudo[1595]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:33:12.840330 sudo[1595]: pam_unix(sudo:session): session closed for user root Apr 17 23:33:12.843164 sshd[1592]: pam_unix(sshd:session): session closed for user core Apr 17 23:33:12.849064 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:36986.service: Deactivated successfully. Apr 17 23:33:12.850308 systemd[1]: session-5.scope: Deactivated successfully. Apr 17 23:33:12.851310 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit. Apr 17 23:33:12.863311 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:36990.service - OpenSSH per-connection server daemon (10.0.0.1:36990). Apr 17 23:33:12.864247 systemd-logind[1440]: Removed session 5. 
Apr 17 23:33:12.889731 sshd[1600]: Accepted publickey for core from 10.0.0.1 port 36990 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:33:12.890684 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:12.894819 systemd-logind[1440]: New session 6 of user core. Apr 17 23:33:12.908217 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 17 23:33:12.959766 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 17 23:33:12.959978 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:33:12.963532 sudo[1604]: pam_unix(sudo:session): session closed for user root Apr 17 23:33:12.967665 sudo[1603]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 17 23:33:12.967866 sudo[1603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:33:12.984283 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 17 23:33:12.985847 auditctl[1607]: No rules Apr 17 23:33:12.986141 systemd[1]: audit-rules.service: Deactivated successfully. Apr 17 23:33:12.986313 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 17 23:33:12.988454 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:33:13.010988 augenrules[1625]: No rules Apr 17 23:33:13.011944 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:33:13.012721 sudo[1603]: pam_unix(sudo:session): session closed for user root Apr 17 23:33:13.014163 sshd[1600]: pam_unix(sshd:session): session closed for user core Apr 17 23:33:13.025944 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:36990.service: Deactivated successfully. Apr 17 23:33:13.027119 systemd[1]: session-6.scope: Deactivated successfully. 
Apr 17 23:33:13.028060 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit. Apr 17 23:33:13.028885 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:36992.service - OpenSSH per-connection server daemon (10.0.0.1:36992). Apr 17 23:33:13.029827 systemd-logind[1440]: Removed session 6. Apr 17 23:33:13.061622 sshd[1633]: Accepted publickey for core from 10.0.0.1 port 36992 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:33:13.062977 sshd[1633]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:33:13.067645 systemd-logind[1440]: New session 7 of user core. Apr 17 23:33:13.077181 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 17 23:33:13.129244 sudo[1636]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 17 23:33:13.129456 sudo[1636]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 17 23:33:13.364409 (dockerd)[1653]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 17 23:33:13.365200 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 17 23:33:13.619962 dockerd[1653]: time="2026-04-17T23:33:13.619679261Z" level=info msg="Starting up" Apr 17 23:33:13.833755 dockerd[1653]: time="2026-04-17T23:33:13.833641334Z" level=info msg="Loading containers: start." Apr 17 23:33:13.949233 kernel: Initializing XFRM netlink socket Apr 17 23:33:14.028058 systemd-networkd[1388]: docker0: Link UP Apr 17 23:33:14.051632 dockerd[1653]: time="2026-04-17T23:33:14.051528657Z" level=info msg="Loading containers: done." Apr 17 23:33:14.064723 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3950910839-merged.mount: Deactivated successfully. 
Apr 17 23:33:14.067075 dockerd[1653]: time="2026-04-17T23:33:14.066843886Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 17 23:33:14.067075 dockerd[1653]: time="2026-04-17T23:33:14.066982745Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 17 23:33:14.067190 dockerd[1653]: time="2026-04-17T23:33:14.067086805Z" level=info msg="Daemon has completed initialization" Apr 17 23:33:14.106980 dockerd[1653]: time="2026-04-17T23:33:14.106894831Z" level=info msg="API listen on /run/docker.sock" Apr 17 23:33:14.107178 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 17 23:33:14.444065 kernel: hrtimer: interrupt took 13459222 ns Apr 17 23:33:14.629789 containerd[1462]: time="2026-04-17T23:33:14.629724058Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:33:15.456516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1384257327.mount: Deactivated successfully. 
Apr 17 23:33:16.181725 containerd[1462]: time="2026-04-17T23:33:16.181654888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:16.182347 containerd[1462]: time="2026-04-17T23:33:16.182313792Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193427" Apr 17 23:33:16.183278 containerd[1462]: time="2026-04-17T23:33:16.183233971Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:16.185549 containerd[1462]: time="2026-04-17T23:33:16.185510230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:16.186401 containerd[1462]: time="2026-04-17T23:33:16.186373448Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 1.556608969s" Apr 17 23:33:16.186401 containerd[1462]: time="2026-04-17T23:33:16.186397806Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\"" Apr 17 23:33:16.187074 containerd[1462]: time="2026-04-17T23:33:16.187020050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 23:33:17.091280 containerd[1462]: time="2026-04-17T23:33:17.091208463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:17.092213 containerd[1462]: time="2026-04-17T23:33:17.092013743Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171379" Apr 17 23:33:17.093476 containerd[1462]: time="2026-04-17T23:33:17.093408313Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:17.096064 containerd[1462]: time="2026-04-17T23:33:17.095983421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:17.097046 containerd[1462]: time="2026-04-17T23:33:17.096986716Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 909.941876ms" Apr 17 23:33:17.097091 containerd[1462]: time="2026-04-17T23:33:17.097052009Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 17 23:33:17.097650 containerd[1462]: time="2026-04-17T23:33:17.097601128Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\"" Apr 17 23:33:17.820699 containerd[1462]: time="2026-04-17T23:33:17.820641488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:17.821713 containerd[1462]: 
time="2026-04-17T23:33:17.821664920Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289688" Apr 17 23:33:17.822709 containerd[1462]: time="2026-04-17T23:33:17.822668301Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:17.825235 containerd[1462]: time="2026-04-17T23:33:17.825196216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:17.826099 containerd[1462]: time="2026-04-17T23:33:17.826062714Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 728.407715ms" Apr 17 23:33:17.826125 containerd[1462]: time="2026-04-17T23:33:17.826097309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 17 23:33:17.826646 containerd[1462]: time="2026-04-17T23:33:17.826587878Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 23:33:18.489226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3575445154.mount: Deactivated successfully. Apr 17 23:33:18.490053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 17 23:33:18.496556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:33:18.648453 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:33:18.651813 (kubelet)[1884]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:33:18.691732 kubelet[1884]: E0417 23:33:18.691685 1884 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:33:18.695133 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:33:18.695304 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:33:18.902666 containerd[1462]: time="2026-04-17T23:33:18.902487124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:18.903474 containerd[1462]: time="2026-04-17T23:33:18.903406606Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010605" Apr 17 23:33:18.904510 containerd[1462]: time="2026-04-17T23:33:18.904440455Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:18.907444 containerd[1462]: time="2026-04-17T23:33:18.907275471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:18.908135 containerd[1462]: time="2026-04-17T23:33:18.908098640Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.081463355s" Apr 17 23:33:18.908135 containerd[1462]: time="2026-04-17T23:33:18.908130968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 17 23:33:18.908627 containerd[1462]: time="2026-04-17T23:33:18.908588943Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 23:33:19.309207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3055591800.mount: Deactivated successfully. Apr 17 23:33:19.916386 containerd[1462]: time="2026-04-17T23:33:19.916324192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:19.917432 containerd[1462]: time="2026-04-17T23:33:19.917391441Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714" Apr 17 23:33:19.924739 containerd[1462]: time="2026-04-17T23:33:19.924680433Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:19.947521 containerd[1462]: time="2026-04-17T23:33:19.947416033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:19.948466 containerd[1462]: time="2026-04-17T23:33:19.948425239Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.039805536s" Apr 17 23:33:19.948466 containerd[1462]: time="2026-04-17T23:33:19.948464170Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 17 23:33:19.949387 containerd[1462]: time="2026-04-17T23:33:19.949260599Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 23:33:20.313580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3527259011.mount: Deactivated successfully. Apr 17 23:33:20.319972 containerd[1462]: time="2026-04-17T23:33:20.319711183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:20.320912 containerd[1462]: time="2026-04-17T23:33:20.320845383Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070" Apr 17 23:33:20.322654 containerd[1462]: time="2026-04-17T23:33:20.322580629Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:20.325257 containerd[1462]: time="2026-04-17T23:33:20.324696541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:20.326226 containerd[1462]: time="2026-04-17T23:33:20.326171711Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 376.877135ms" Apr 17 
23:33:20.326226 containerd[1462]: time="2026-04-17T23:33:20.326209500Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 17 23:33:20.326768 containerd[1462]: time="2026-04-17T23:33:20.326736448Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 23:33:20.727955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3955510419.mount: Deactivated successfully. Apr 17 23:33:21.337917 containerd[1462]: time="2026-04-17T23:33:21.337833068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:21.338683 containerd[1462]: time="2026-04-17T23:33:21.338625039Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718826" Apr 17 23:33:21.339797 containerd[1462]: time="2026-04-17T23:33:21.339654026Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:21.342622 containerd[1462]: time="2026-04-17T23:33:21.342575195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:21.343441 containerd[1462]: time="2026-04-17T23:33:21.343394362Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.0166342s" Apr 17 23:33:21.343441 containerd[1462]: time="2026-04-17T23:33:21.343436978Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image 
reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 17 23:33:23.848309 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:33:23.857252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:33:23.882769 systemd[1]: Reloading requested from client PID 2041 ('systemctl') (unit session-7.scope)... Apr 17 23:33:23.882794 systemd[1]: Reloading... Apr 17 23:33:23.936168 zram_generator::config[2076]: No configuration found. Apr 17 23:33:24.019219 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:33:24.066689 systemd[1]: Reloading finished in 183 ms. Apr 17 23:33:24.107287 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:33:24.110402 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:33:24.110752 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:33:24.110967 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:33:24.112430 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:33:24.219521 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:33:24.224336 (kubelet)[2130]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:33:24.264414 kubelet[2130]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:33:24.264414 kubelet[2130]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Apr 17 23:33:24.264414 kubelet[2130]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:33:24.265559 kubelet[2130]: I0417 23:33:24.264894 2130 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:33:24.614870 kubelet[2130]: I0417 23:33:24.614828 2130 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:33:24.614870 kubelet[2130]: I0417 23:33:24.614861 2130 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:33:24.615146 kubelet[2130]: I0417 23:33:24.615116 2130 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:33:24.638055 kubelet[2130]: E0417 23:33:24.637980 2130 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:33:24.642025 kubelet[2130]: I0417 23:33:24.641949 2130 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:33:24.647852 kubelet[2130]: E0417 23:33:24.647805 2130 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:33:24.647852 kubelet[2130]: I0417 23:33:24.647842 2130 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Apr 17 23:33:24.650879 kubelet[2130]: I0417 23:33:24.650849 2130 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 17 23:33:24.651148 kubelet[2130]: I0417 23:33:24.651111 2130 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:33:24.651294 kubelet[2130]: I0417 23:33:24.651142 2130 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyMana
gerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:33:24.651294 kubelet[2130]: I0417 23:33:24.651292 2130 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:33:24.651388 kubelet[2130]: I0417 23:33:24.651299 2130 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:33:24.651404 kubelet[2130]: I0417 23:33:24.651387 2130 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:33:24.654634 kubelet[2130]: I0417 23:33:24.654571 2130 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:33:24.654634 kubelet[2130]: I0417 23:33:24.654595 2130 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:33:24.654634 kubelet[2130]: I0417 23:33:24.654623 2130 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:33:24.656130 kubelet[2130]: I0417 23:33:24.656091 2130 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:33:24.659449 kubelet[2130]: E0417 23:33:24.659402 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:33:24.661062 kubelet[2130]: E0417 23:33:24.659675 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:33:24.661062 kubelet[2130]: I0417 23:33:24.659766 2130 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:33:24.661062 kubelet[2130]: I0417 23:33:24.660737 2130 
kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:33:24.661500 kubelet[2130]: W0417 23:33:24.661448 2130 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 17 23:33:24.667771 kubelet[2130]: I0417 23:33:24.667738 2130 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:33:24.668339 kubelet[2130]: I0417 23:33:24.667985 2130 server.go:1289] "Started kubelet" Apr 17 23:33:24.668339 kubelet[2130]: I0417 23:33:24.668257 2130 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:33:24.670386 kubelet[2130]: I0417 23:33:24.669580 2130 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:33:24.670386 kubelet[2130]: I0417 23:33:24.669866 2130 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:33:24.672825 kubelet[2130]: E0417 23:33:24.670248 2130 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a748f738c928da default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-17 23:33:24.66777929 +0000 UTC m=+0.439497720,LastTimestamp:2026-04-17 23:33:24.66777929 +0000 UTC m=+0.439497720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 17 23:33:24.672825 kubelet[2130]: I0417 23:33:24.671408 2130 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:33:24.672825 kubelet[2130]: E0417 23:33:24.671693 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:24.672825 kubelet[2130]: I0417 23:33:24.671716 2130 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:33:24.672825 kubelet[2130]: I0417 23:33:24.671880 2130 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:33:24.672825 kubelet[2130]: I0417 23:33:24.672052 2130 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:33:24.672825 kubelet[2130]: E0417 23:33:24.672370 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:33:24.672825 kubelet[2130]: I0417 23:33:24.672428 2130 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:33:24.673179 kubelet[2130]: I0417 23:33:24.672738 2130 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:33:24.673604 kubelet[2130]: E0417 23:33:24.673535 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms" Apr 17 23:33:24.674884 kubelet[2130]: I0417 23:33:24.674732 2130 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:33:24.674884 kubelet[2130]: I0417 23:33:24.674791 2130 factory.go:221] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:33:24.675454 kubelet[2130]: I0417 23:33:24.675414 2130 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:33:24.675577 kubelet[2130]: E0417 23:33:24.675549 2130 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:33:24.676836 kubelet[2130]: I0417 23:33:24.676795 2130 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:33:24.686621 kubelet[2130]: I0417 23:33:24.686595 2130 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:33:24.686621 kubelet[2130]: I0417 23:33:24.686612 2130 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:33:24.686621 kubelet[2130]: I0417 23:33:24.686624 2130 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:33:24.731507 kubelet[2130]: I0417 23:33:24.731450 2130 policy_none.go:49] "None policy: Start" Apr 17 23:33:24.731630 kubelet[2130]: I0417 23:33:24.731528 2130 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:33:24.731630 kubelet[2130]: I0417 23:33:24.731555 2130 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:33:24.736060 kubelet[2130]: I0417 23:33:24.736030 2130 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:33:24.736120 kubelet[2130]: I0417 23:33:24.736065 2130 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:33:24.736120 kubelet[2130]: I0417 23:33:24.736087 2130 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:33:24.736120 kubelet[2130]: I0417 23:33:24.736094 2130 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:33:24.736185 kubelet[2130]: E0417 23:33:24.736170 2130 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:33:24.737973 kubelet[2130]: E0417 23:33:24.737941 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:33:24.745973 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:33:24.761412 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 23:33:24.772282 kubelet[2130]: E0417 23:33:24.772189 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:24.778568 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Apr 17 23:33:24.779621 kubelet[2130]: E0417 23:33:24.779599 2130 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:33:24.779833 kubelet[2130]: I0417 23:33:24.779766 2130 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:33:24.779833 kubelet[2130]: I0417 23:33:24.779775 2130 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:33:24.780016 kubelet[2130]: I0417 23:33:24.779953 2130 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:33:24.780779 kubelet[2130]: E0417 23:33:24.780747 2130 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:33:24.780779 kubelet[2130]: E0417 23:33:24.780780 2130 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 17 23:33:24.843873 systemd[1]: Created slice kubepods-burstable-pod5ae3de833062eb7326c7f9ed4798e133.slice - libcontainer container kubepods-burstable-pod5ae3de833062eb7326c7f9ed4798e133.slice. Apr 17 23:33:24.848570 kubelet[2130]: E0417 23:33:24.848522 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:33:24.851392 systemd[1]: Created slice kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice - libcontainer container kubepods-burstable-pode9ca41790ae21be9f4cbd451ade0acec.slice. 
Apr 17 23:33:24.867130 kubelet[2130]: E0417 23:33:24.867038 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:33:24.869266 systemd[1]: Created slice kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice - libcontainer container kubepods-burstable-pod33fee6ba1581201eda98a989140db110.slice. Apr 17 23:33:24.870747 kubelet[2130]: E0417 23:33:24.870713 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:33:24.874228 kubelet[2130]: E0417 23:33:24.874191 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" Apr 17 23:33:24.881452 kubelet[2130]: I0417 23:33:24.881259 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:33:24.881544 kubelet[2130]: E0417 23:33:24.881514 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Apr 17 23:33:24.973192 kubelet[2130]: I0417 23:33:24.973150 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:24.973346 kubelet[2130]: I0417 23:33:24.973201 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:24.973346 kubelet[2130]: I0417 23:33:24.973223 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:24.973346 kubelet[2130]: I0417 23:33:24.973244 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ae3de833062eb7326c7f9ed4798e133-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ae3de833062eb7326c7f9ed4798e133\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:24.973346 kubelet[2130]: I0417 23:33:24.973264 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ae3de833062eb7326c7f9ed4798e133-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ae3de833062eb7326c7f9ed4798e133\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:24.973346 kubelet[2130]: I0417 23:33:24.973283 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:24.973469 kubelet[2130]: I0417 23:33:24.973299 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:24.973469 kubelet[2130]: I0417 23:33:24.973322 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:33:24.973741 kubelet[2130]: I0417 23:33:24.973711 2130 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ae3de833062eb7326c7f9ed4798e133-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ae3de833062eb7326c7f9ed4798e133\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:25.083347 kubelet[2130]: I0417 23:33:25.083236 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:33:25.083936 kubelet[2130]: E0417 23:33:25.083879 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Apr 17 23:33:25.149563 kubelet[2130]: E0417 23:33:25.149355 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:25.150453 containerd[1462]: time="2026-04-17T23:33:25.150216726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ae3de833062eb7326c7f9ed4798e133,Namespace:kube-system,Attempt:0,}" Apr 17 23:33:25.168255 kubelet[2130]: E0417 23:33:25.168223 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:25.168728 containerd[1462]: time="2026-04-17T23:33:25.168698740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,}" Apr 17 23:33:25.171107 kubelet[2130]: E0417 23:33:25.171056 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:25.171398 containerd[1462]: time="2026-04-17T23:33:25.171377032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,}" Apr 17 23:33:25.275333 kubelet[2130]: E0417 23:33:25.275245 2130 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" Apr 17 23:33:25.485061 kubelet[2130]: I0417 23:33:25.484946 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:33:25.485329 kubelet[2130]: E0417 23:33:25.485291 2130 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" Apr 17 23:33:25.545973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240556707.mount: Deactivated successfully. 
Apr 17 23:33:25.550191 containerd[1462]: time="2026-04-17T23:33:25.550135021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:33:25.550990 containerd[1462]: time="2026-04-17T23:33:25.550882470Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 17 23:33:25.553339 containerd[1462]: time="2026-04-17T23:33:25.553284436Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:33:25.554463 containerd[1462]: time="2026-04-17T23:33:25.554417077Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:33:25.555040 containerd[1462]: time="2026-04-17T23:33:25.554944219Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:33:25.555992 containerd[1462]: time="2026-04-17T23:33:25.555908048Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:33:25.556790 containerd[1462]: time="2026-04-17T23:33:25.556747614Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:33:25.557691 containerd[1462]: time="2026-04-17T23:33:25.557646987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:33:25.558135 
containerd[1462]: time="2026-04-17T23:33:25.558072106Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 407.752354ms" Apr 17 23:33:25.560715 containerd[1462]: time="2026-04-17T23:33:25.560687732Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 391.925727ms" Apr 17 23:33:25.565342 containerd[1462]: time="2026-04-17T23:33:25.565301217Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 393.812779ms" Apr 17 23:33:25.658528 containerd[1462]: time="2026-04-17T23:33:25.658413223Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:33:25.658693 containerd[1462]: time="2026-04-17T23:33:25.658475915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:33:25.658693 containerd[1462]: time="2026-04-17T23:33:25.658488172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:33:25.659069 containerd[1462]: time="2026-04-17T23:33:25.658962695Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:33:25.659978 containerd[1462]: time="2026-04-17T23:33:25.659899793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:33:25.659978 containerd[1462]: time="2026-04-17T23:33:25.659953539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:33:25.660124 containerd[1462]: time="2026-04-17T23:33:25.659972764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:33:25.660124 containerd[1462]: time="2026-04-17T23:33:25.660102246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:33:25.663135 containerd[1462]: time="2026-04-17T23:33:25.663032023Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:33:25.663335 containerd[1462]: time="2026-04-17T23:33:25.663146945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:33:25.663335 containerd[1462]: time="2026-04-17T23:33:25.663169048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:33:25.663335 containerd[1462]: time="2026-04-17T23:33:25.663224447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:33:25.681237 systemd[1]: Started cri-containerd-bd5cd1621b8395af1159397d6b5f152eeca30026681092acb7ac4a18e098b6ae.scope - libcontainer container bd5cd1621b8395af1159397d6b5f152eeca30026681092acb7ac4a18e098b6ae. 
Apr 17 23:33:25.682426 kubelet[2130]: E0417 23:33:25.682251 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:33:25.682618 systemd[1]: Started cri-containerd-f115ed8c9c2f1356a87361eefd83527bd7591ab187c552743778a3cb1f9e7bdf.scope - libcontainer container f115ed8c9c2f1356a87361eefd83527bd7591ab187c552743778a3cb1f9e7bdf. Apr 17 23:33:25.687371 systemd[1]: Started cri-containerd-a4ec29956d9611946e26ba19a6a254383438e9f8a449b615c902514ac093cc7a.scope - libcontainer container a4ec29956d9611946e26ba19a6a254383438e9f8a449b615c902514ac093cc7a. Apr 17 23:33:25.709504 kubelet[2130]: E0417 23:33:25.709451 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:33:25.726974 containerd[1462]: time="2026-04-17T23:33:25.726938691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5ae3de833062eb7326c7f9ed4798e133,Namespace:kube-system,Attempt:0,} returns sandbox id \"f115ed8c9c2f1356a87361eefd83527bd7591ab187c552743778a3cb1f9e7bdf\"" Apr 17 23:33:25.729446 kubelet[2130]: E0417 23:33:25.729269 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:25.730830 containerd[1462]: time="2026-04-17T23:33:25.730631050Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:33fee6ba1581201eda98a989140db110,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd5cd1621b8395af1159397d6b5f152eeca30026681092acb7ac4a18e098b6ae\"" Apr 17 23:33:25.730830 containerd[1462]: time="2026-04-17T23:33:25.730741613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ca41790ae21be9f4cbd451ade0acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"a4ec29956d9611946e26ba19a6a254383438e9f8a449b615c902514ac093cc7a\"" Apr 17 23:33:25.731715 kubelet[2130]: E0417 23:33:25.731264 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:25.731966 kubelet[2130]: E0417 23:33:25.731950 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:25.734608 containerd[1462]: time="2026-04-17T23:33:25.734565541Z" level=info msg="CreateContainer within sandbox \"f115ed8c9c2f1356a87361eefd83527bd7591ab187c552743778a3cb1f9e7bdf\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:33:25.736303 containerd[1462]: time="2026-04-17T23:33:25.736237497Z" level=info msg="CreateContainer within sandbox \"bd5cd1621b8395af1159397d6b5f152eeca30026681092acb7ac4a18e098b6ae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:33:25.738417 containerd[1462]: time="2026-04-17T23:33:25.738398758Z" level=info msg="CreateContainer within sandbox \"a4ec29956d9611946e26ba19a6a254383438e9f8a449b615c902514ac093cc7a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:33:25.755362 containerd[1462]: time="2026-04-17T23:33:25.755296208Z" level=info msg="CreateContainer within sandbox 
\"bd5cd1621b8395af1159397d6b5f152eeca30026681092acb7ac4a18e098b6ae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"504100e2d30f6eddfb476319c0f08471668843e17c61dc8f5994458b5b6df4fe\"" Apr 17 23:33:25.757082 containerd[1462]: time="2026-04-17T23:33:25.756406216Z" level=info msg="StartContainer for \"504100e2d30f6eddfb476319c0f08471668843e17c61dc8f5994458b5b6df4fe\"" Apr 17 23:33:25.760808 containerd[1462]: time="2026-04-17T23:33:25.760717693Z" level=info msg="CreateContainer within sandbox \"a4ec29956d9611946e26ba19a6a254383438e9f8a449b615c902514ac093cc7a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"86f4c29fface3396f070672bad969dd0152b8a01d848ab412a6f51571ff4a39b\"" Apr 17 23:33:25.761628 containerd[1462]: time="2026-04-17T23:33:25.761170798Z" level=info msg="StartContainer for \"86f4c29fface3396f070672bad969dd0152b8a01d848ab412a6f51571ff4a39b\"" Apr 17 23:33:25.761628 containerd[1462]: time="2026-04-17T23:33:25.761523047Z" level=info msg="CreateContainer within sandbox \"f115ed8c9c2f1356a87361eefd83527bd7591ab187c552743778a3cb1f9e7bdf\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4c43b784d723f96949278880ce7feb4f4e79905211c22413daa18e9c7ff3f807\"" Apr 17 23:33:25.762727 containerd[1462]: time="2026-04-17T23:33:25.761968870Z" level=info msg="StartContainer for \"4c43b784d723f96949278880ce7feb4f4e79905211c22413daa18e9c7ff3f807\"" Apr 17 23:33:25.790237 systemd[1]: Started cri-containerd-4c43b784d723f96949278880ce7feb4f4e79905211c22413daa18e9c7ff3f807.scope - libcontainer container 4c43b784d723f96949278880ce7feb4f4e79905211c22413daa18e9c7ff3f807. Apr 17 23:33:25.791303 systemd[1]: Started cri-containerd-504100e2d30f6eddfb476319c0f08471668843e17c61dc8f5994458b5b6df4fe.scope - libcontainer container 504100e2d30f6eddfb476319c0f08471668843e17c61dc8f5994458b5b6df4fe. 
Apr 17 23:33:25.796250 systemd[1]: Started cri-containerd-86f4c29fface3396f070672bad969dd0152b8a01d848ab412a6f51571ff4a39b.scope - libcontainer container 86f4c29fface3396f070672bad969dd0152b8a01d848ab412a6f51571ff4a39b. Apr 17 23:33:25.822435 kubelet[2130]: E0417 23:33:25.822372 2130 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:33:25.835811 containerd[1462]: time="2026-04-17T23:33:25.835700561Z" level=info msg="StartContainer for \"504100e2d30f6eddfb476319c0f08471668843e17c61dc8f5994458b5b6df4fe\" returns successfully" Apr 17 23:33:25.844024 containerd[1462]: time="2026-04-17T23:33:25.843961956Z" level=info msg="StartContainer for \"86f4c29fface3396f070672bad969dd0152b8a01d848ab412a6f51571ff4a39b\" returns successfully" Apr 17 23:33:25.844284 containerd[1462]: time="2026-04-17T23:33:25.844112804Z" level=info msg="StartContainer for \"4c43b784d723f96949278880ce7feb4f4e79905211c22413daa18e9c7ff3f807\" returns successfully" Apr 17 23:33:26.289625 kubelet[2130]: I0417 23:33:26.289571 2130 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:33:26.630339 kubelet[2130]: E0417 23:33:26.630189 2130 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Apr 17 23:33:26.744710 kubelet[2130]: E0417 23:33:26.744645 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:33:26.744831 kubelet[2130]: E0417 23:33:26.744774 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 17 23:33:26.745507 kubelet[2130]: E0417 23:33:26.745483 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:33:26.745630 kubelet[2130]: E0417 23:33:26.745569 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:26.747034 kubelet[2130]: E0417 23:33:26.746961 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:33:26.747080 kubelet[2130]: E0417 23:33:26.747067 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:26.811601 kubelet[2130]: I0417 23:33:26.811543 2130 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 23:33:26.811601 kubelet[2130]: E0417 23:33:26.811592 2130 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Apr 17 23:33:26.821813 kubelet[2130]: E0417 23:33:26.821762 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:26.922406 kubelet[2130]: E0417 23:33:26.922180 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:27.023241 kubelet[2130]: E0417 23:33:27.023170 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:27.123685 kubelet[2130]: E0417 23:33:27.123606 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:27.224917 kubelet[2130]: E0417 
23:33:27.224724 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:27.325059 kubelet[2130]: E0417 23:33:27.324930 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:27.425929 kubelet[2130]: E0417 23:33:27.425836 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:27.527060 kubelet[2130]: E0417 23:33:27.526660 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:27.627359 kubelet[2130]: E0417 23:33:27.627293 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:27.728336 kubelet[2130]: E0417 23:33:27.728250 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:27.749152 kubelet[2130]: E0417 23:33:27.749100 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:33:27.749272 kubelet[2130]: E0417 23:33:27.749230 2130 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:33:27.749272 kubelet[2130]: E0417 23:33:27.749240 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:27.749333 kubelet[2130]: E0417 23:33:27.749308 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:27.749398 kubelet[2130]: E0417 23:33:27.749376 2130 kubelet.go:3305] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 17 23:33:27.749474 kubelet[2130]: E0417 23:33:27.749456 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:27.829332 kubelet[2130]: E0417 23:33:27.829159 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:27.930307 kubelet[2130]: E0417 23:33:27.930233 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:28.031273 kubelet[2130]: E0417 23:33:28.031215 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:28.132457 kubelet[2130]: E0417 23:33:28.132245 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:28.233237 kubelet[2130]: E0417 23:33:28.233149 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:28.334251 kubelet[2130]: E0417 23:33:28.334183 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:28.437810 kubelet[2130]: E0417 23:33:28.436106 2130 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:28.473839 kubelet[2130]: I0417 23:33:28.473533 2130 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:28.487616 kubelet[2130]: I0417 23:33:28.487519 2130 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:33:28.492171 kubelet[2130]: I0417 23:33:28.492121 2130 kubelet.go:3309] "Creating a mirror 
pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:28.660147 kubelet[2130]: I0417 23:33:28.660115 2130 apiserver.go:52] "Watching apiserver" Apr 17 23:33:28.664079 systemd[1]: Reloading requested from client PID 2420 ('systemctl') (unit session-7.scope)... Apr 17 23:33:28.664103 systemd[1]: Reloading... Apr 17 23:33:28.672714 kubelet[2130]: I0417 23:33:28.672648 2130 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:33:28.716068 zram_generator::config[2459]: No configuration found. Apr 17 23:33:28.749255 kubelet[2130]: I0417 23:33:28.749232 2130 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:33:28.749560 kubelet[2130]: E0417 23:33:28.749410 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:28.749606 kubelet[2130]: I0417 23:33:28.749562 2130 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:28.760336 kubelet[2130]: E0417 23:33:28.759936 2130 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:33:28.760336 kubelet[2130]: E0417 23:33:28.760155 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:28.760336 kubelet[2130]: E0417 23:33:28.760339 2130 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:28.760810 kubelet[2130]: E0417 23:33:28.760719 2130 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:28.800796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:33:28.859172 systemd[1]: Reloading finished in 194 ms. Apr 17 23:33:28.890211 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:33:28.909153 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:33:28.909360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:33:28.924476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:33:29.039336 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:33:29.045975 (kubelet)[2504]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:33:29.092019 kubelet[2504]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:33:29.092019 kubelet[2504]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:33:29.092019 kubelet[2504]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:33:29.092395 kubelet[2504]: I0417 23:33:29.092064 2504 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:33:29.097748 kubelet[2504]: I0417 23:33:29.097700 2504 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:33:29.097748 kubelet[2504]: I0417 23:33:29.097734 2504 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:33:29.097906 kubelet[2504]: I0417 23:33:29.097887 2504 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:33:29.099143 kubelet[2504]: I0417 23:33:29.099111 2504 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:33:29.101932 kubelet[2504]: I0417 23:33:29.101880 2504 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:33:29.104566 kubelet[2504]: E0417 23:33:29.104514 2504 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:33:29.104566 kubelet[2504]: I0417 23:33:29.104543 2504 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:33:29.108183 kubelet[2504]: I0417 23:33:29.108156 2504 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:33:29.108344 kubelet[2504]: I0417 23:33:29.108297 2504 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:33:29.108487 kubelet[2504]: I0417 23:33:29.108324 2504 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:33:29.108595 kubelet[2504]: I0417 23:33:29.108496 2504 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:33:29.108595 
kubelet[2504]: I0417 23:33:29.108504 2504 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:33:29.108595 kubelet[2504]: I0417 23:33:29.108538 2504 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:33:29.108697 kubelet[2504]: I0417 23:33:29.108668 2504 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:33:29.108750 kubelet[2504]: I0417 23:33:29.108707 2504 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:33:29.108750 kubelet[2504]: I0417 23:33:29.108725 2504 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:33:29.108750 kubelet[2504]: I0417 23:33:29.108735 2504 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:33:29.109717 kubelet[2504]: I0417 23:33:29.109595 2504 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:33:29.110108 kubelet[2504]: I0417 23:33:29.110030 2504 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:33:29.112930 kubelet[2504]: I0417 23:33:29.112861 2504 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:33:29.112930 kubelet[2504]: I0417 23:33:29.112929 2504 server.go:1289] "Started kubelet" Apr 17 23:33:29.113657 kubelet[2504]: I0417 23:33:29.113608 2504 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:33:29.113969 kubelet[2504]: I0417 23:33:29.113903 2504 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:33:29.114284 kubelet[2504]: I0417 23:33:29.114261 2504 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:33:29.115062 kubelet[2504]: I0417 23:33:29.114960 2504 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:33:29.115356 
kubelet[2504]: I0417 23:33:29.113598 2504 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:33:29.115761 kubelet[2504]: I0417 23:33:29.115742 2504 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:33:29.116812 kubelet[2504]: I0417 23:33:29.116775 2504 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:33:29.117156 kubelet[2504]: E0417 23:33:29.116916 2504 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 17 23:33:29.117308 kubelet[2504]: I0417 23:33:29.117240 2504 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:33:29.117413 kubelet[2504]: I0417 23:33:29.117388 2504 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:33:29.118088 kubelet[2504]: I0417 23:33:29.118048 2504 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:33:29.118178 kubelet[2504]: I0417 23:33:29.118132 2504 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:33:29.120786 kubelet[2504]: I0417 23:33:29.120775 2504 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:33:29.127120 kubelet[2504]: E0417 23:33:29.127066 2504 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:33:29.130325 kubelet[2504]: I0417 23:33:29.130272 2504 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:33:29.133723 kubelet[2504]: I0417 23:33:29.133342 2504 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:33:29.133723 kubelet[2504]: I0417 23:33:29.133361 2504 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:33:29.133723 kubelet[2504]: I0417 23:33:29.133377 2504 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:33:29.133723 kubelet[2504]: I0417 23:33:29.133383 2504 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:33:29.133723 kubelet[2504]: E0417 23:33:29.133414 2504 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:33:29.163668 kubelet[2504]: I0417 23:33:29.163618 2504 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:33:29.163668 kubelet[2504]: I0417 23:33:29.163647 2504 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:33:29.163872 kubelet[2504]: I0417 23:33:29.163759 2504 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:33:29.163912 kubelet[2504]: I0417 23:33:29.163903 2504 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:33:29.163930 kubelet[2504]: I0417 23:33:29.163912 2504 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:33:29.163948 kubelet[2504]: I0417 23:33:29.163931 2504 policy_none.go:49] "None policy: Start" Apr 17 23:33:29.163948 kubelet[2504]: I0417 23:33:29.163944 2504 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:33:29.163976 kubelet[2504]: I0417 23:33:29.163954 2504 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:33:29.164133 kubelet[2504]: I0417 23:33:29.164098 2504 state_mem.go:75] "Updated machine memory state" Apr 17 23:33:29.168912 kubelet[2504]: E0417 23:33:29.168178 2504 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:33:29.168912 kubelet[2504]: I0417 
23:33:29.168639 2504 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:33:29.168912 kubelet[2504]: I0417 23:33:29.168655 2504 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:33:29.168912 kubelet[2504]: I0417 23:33:29.168851 2504 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:33:29.170982 kubelet[2504]: E0417 23:33:29.170901 2504 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:33:29.235578 kubelet[2504]: I0417 23:33:29.235495 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:29.235578 kubelet[2504]: I0417 23:33:29.235589 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:29.235839 kubelet[2504]: I0417 23:33:29.235668 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:33:29.246605 kubelet[2504]: E0417 23:33:29.246570 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:29.247209 kubelet[2504]: E0417 23:33:29.247171 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:29.247477 kubelet[2504]: E0417 23:33:29.247452 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:33:29.274268 kubelet[2504]: I0417 23:33:29.274219 2504 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 17 23:33:29.283616 kubelet[2504]: I0417 23:33:29.283583 2504 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Apr 17 23:33:29.283802 kubelet[2504]: I0417 23:33:29.283718 2504 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Apr 17 23:33:29.318326 kubelet[2504]: I0417 23:33:29.318130 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ae3de833062eb7326c7f9ed4798e133-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ae3de833062eb7326c7f9ed4798e133\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:29.318326 kubelet[2504]: I0417 23:33:29.318176 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ae3de833062eb7326c7f9ed4798e133-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5ae3de833062eb7326c7f9ed4798e133\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:29.318326 kubelet[2504]: I0417 23:33:29.318202 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:29.318326 kubelet[2504]: I0417 23:33:29.318248 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:29.318326 kubelet[2504]: I0417 23:33:29.318318 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/33fee6ba1581201eda98a989140db110-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"33fee6ba1581201eda98a989140db110\") " pod="kube-system/kube-scheduler-localhost" Apr 17 23:33:29.318643 kubelet[2504]: I0417 23:33:29.318345 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ae3de833062eb7326c7f9ed4798e133-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5ae3de833062eb7326c7f9ed4798e133\") " pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:29.318643 kubelet[2504]: I0417 23:33:29.318435 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:29.318643 kubelet[2504]: I0417 23:33:29.318449 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:29.318643 kubelet[2504]: I0417 23:33:29.318483 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ca41790ae21be9f4cbd451ade0acec-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ca41790ae21be9f4cbd451ade0acec\") " pod="kube-system/kube-controller-manager-localhost" Apr 17 23:33:29.547215 kubelet[2504]: E0417 23:33:29.547134 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:29.547945 kubelet[2504]: E0417 23:33:29.547866 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:29.548088 kubelet[2504]: E0417 23:33:29.547433 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:30.109904 kubelet[2504]: I0417 23:33:30.109857 2504 apiserver.go:52] "Watching apiserver" Apr 17 23:33:30.118070 kubelet[2504]: I0417 23:33:30.118030 2504 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:33:30.153199 kubelet[2504]: I0417 23:33:30.151817 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:30.153199 kubelet[2504]: I0417 23:33:30.151891 2504 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Apr 17 23:33:30.153466 kubelet[2504]: E0417 23:33:30.153275 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:30.167537 kubelet[2504]: E0417 23:33:30.166755 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Apr 17 23:33:30.167537 kubelet[2504]: E0417 23:33:30.167065 2504 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Apr 17 23:33:30.167537 kubelet[2504]: E0417 23:33:30.167167 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Apr 17 23:33:30.167537 kubelet[2504]: E0417 23:33:30.167197 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:30.232222 kubelet[2504]: I0417 23:33:30.232122 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.232102733 podStartE2EDuration="2.232102733s" podCreationTimestamp="2026-04-17 23:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:33:30.224162194 +0000 UTC m=+1.173435796" watchObservedRunningTime="2026-04-17 23:33:30.232102733 +0000 UTC m=+1.181376345" Apr 17 23:33:30.244844 kubelet[2504]: I0417 23:33:30.244778 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.244755311 podStartE2EDuration="2.244755311s" podCreationTimestamp="2026-04-17 23:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:33:30.232307043 +0000 UTC m=+1.181580644" watchObservedRunningTime="2026-04-17 23:33:30.244755311 +0000 UTC m=+1.194028912" Apr 17 23:33:31.152778 kubelet[2504]: E0417 23:33:31.152603 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:31.152778 kubelet[2504]: E0417 23:33:31.152726 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:31.153306 kubelet[2504]: E0417 23:33:31.152948 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, 
some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:34.838463 kubelet[2504]: E0417 23:33:34.838423 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:35.674794 kubelet[2504]: I0417 23:33:35.674755 2504 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:33:35.675210 containerd[1462]: time="2026-04-17T23:33:35.675169689Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:33:35.675444 kubelet[2504]: I0417 23:33:35.675363 2504 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:33:36.648254 kubelet[2504]: I0417 23:33:36.646887 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=8.646864749 podStartE2EDuration="8.646864749s" podCreationTimestamp="2026-04-17 23:33:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:33:30.245947365 +0000 UTC m=+1.195220974" watchObservedRunningTime="2026-04-17 23:33:36.646864749 +0000 UTC m=+7.596138370" Apr 17 23:33:36.663333 systemd[1]: Created slice kubepods-besteffort-podf3a3c76a_3b96_41ec_90c8_991162d989a6.slice - libcontainer container kubepods-besteffort-podf3a3c76a_3b96_41ec_90c8_991162d989a6.slice. 
Apr 17 23:33:36.672269 kubelet[2504]: I0417 23:33:36.672221 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f3a3c76a-3b96-41ec-90c8-991162d989a6-kube-proxy\") pod \"kube-proxy-bkmck\" (UID: \"f3a3c76a-3b96-41ec-90c8-991162d989a6\") " pod="kube-system/kube-proxy-bkmck" Apr 17 23:33:36.672269 kubelet[2504]: I0417 23:33:36.672260 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3a3c76a-3b96-41ec-90c8-991162d989a6-xtables-lock\") pod \"kube-proxy-bkmck\" (UID: \"f3a3c76a-3b96-41ec-90c8-991162d989a6\") " pod="kube-system/kube-proxy-bkmck" Apr 17 23:33:36.672415 kubelet[2504]: I0417 23:33:36.672276 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3a3c76a-3b96-41ec-90c8-991162d989a6-lib-modules\") pod \"kube-proxy-bkmck\" (UID: \"f3a3c76a-3b96-41ec-90c8-991162d989a6\") " pod="kube-system/kube-proxy-bkmck" Apr 17 23:33:36.672415 kubelet[2504]: I0417 23:33:36.672288 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwxxs\" (UniqueName: \"kubernetes.io/projected/f3a3c76a-3b96-41ec-90c8-991162d989a6-kube-api-access-qwxxs\") pod \"kube-proxy-bkmck\" (UID: \"f3a3c76a-3b96-41ec-90c8-991162d989a6\") " pod="kube-system/kube-proxy-bkmck" Apr 17 23:33:36.773103 systemd[1]: Created slice kubepods-besteffort-pod297d3004_fcdc_467d_92f5_64a6b00c8019.slice - libcontainer container kubepods-besteffort-pod297d3004_fcdc_467d_92f5_64a6b00c8019.slice. 
Apr 17 23:33:36.873946 kubelet[2504]: I0417 23:33:36.873816 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrvrd\" (UniqueName: \"kubernetes.io/projected/297d3004-fcdc-467d-92f5-64a6b00c8019-kube-api-access-xrvrd\") pod \"tigera-operator-6bf85f8dd-6sc4x\" (UID: \"297d3004-fcdc-467d-92f5-64a6b00c8019\") " pod="tigera-operator/tigera-operator-6bf85f8dd-6sc4x"
Apr 17 23:33:36.873946 kubelet[2504]: I0417 23:33:36.873904 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/297d3004-fcdc-467d-92f5-64a6b00c8019-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-6sc4x\" (UID: \"297d3004-fcdc-467d-92f5-64a6b00c8019\") " pod="tigera-operator/tigera-operator-6bf85f8dd-6sc4x"
Apr 17 23:33:36.975511 kubelet[2504]: E0417 23:33:36.975291 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:33:36.976398 containerd[1462]: time="2026-04-17T23:33:36.976263728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bkmck,Uid:f3a3c76a-3b96-41ec-90c8-991162d989a6,Namespace:kube-system,Attempt:0,}"
Apr 17 23:33:36.980280 kubelet[2504]: E0417 23:33:36.980186 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:33:37.003704 containerd[1462]: time="2026-04-17T23:33:37.003390127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:33:37.004135 containerd[1462]: time="2026-04-17T23:33:37.004081479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:33:37.004135 containerd[1462]: time="2026-04-17T23:33:37.004105132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:33:37.004252 containerd[1462]: time="2026-04-17T23:33:37.004169948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:33:37.025285 systemd[1]: Started cri-containerd-fbaa5d82f75a13c02efb828f0a718a1c2cd30e3fa6048f742e9f167e01adb560.scope - libcontainer container fbaa5d82f75a13c02efb828f0a718a1c2cd30e3fa6048f742e9f167e01adb560.
Apr 17 23:33:37.045383 containerd[1462]: time="2026-04-17T23:33:37.045320739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bkmck,Uid:f3a3c76a-3b96-41ec-90c8-991162d989a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbaa5d82f75a13c02efb828f0a718a1c2cd30e3fa6048f742e9f167e01adb560\""
Apr 17 23:33:37.046149 kubelet[2504]: E0417 23:33:37.046126 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:33:37.050490 containerd[1462]: time="2026-04-17T23:33:37.050445179Z" level=info msg="CreateContainer within sandbox \"fbaa5d82f75a13c02efb828f0a718a1c2cd30e3fa6048f742e9f167e01adb560\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 17 23:33:37.067965 containerd[1462]: time="2026-04-17T23:33:37.067901928Z" level=info msg="CreateContainer within sandbox \"fbaa5d82f75a13c02efb828f0a718a1c2cd30e3fa6048f742e9f167e01adb560\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"08aab77126ca49dabac274ecb9ba9e2e4a3ad1509ca96476b13d377326c21811\""
Apr 17 23:33:37.068577 containerd[1462]: time="2026-04-17T23:33:37.068539991Z" level=info msg="StartContainer for \"08aab77126ca49dabac274ecb9ba9e2e4a3ad1509ca96476b13d377326c21811\""
Apr 17 23:33:37.075423 containerd[1462]: time="2026-04-17T23:33:37.075376796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-6sc4x,Uid:297d3004-fcdc-467d-92f5-64a6b00c8019,Namespace:tigera-operator,Attempt:0,}"
Apr 17 23:33:37.098216 systemd[1]: Started cri-containerd-08aab77126ca49dabac274ecb9ba9e2e4a3ad1509ca96476b13d377326c21811.scope - libcontainer container 08aab77126ca49dabac274ecb9ba9e2e4a3ad1509ca96476b13d377326c21811.
Apr 17 23:33:37.121174 containerd[1462]: time="2026-04-17T23:33:37.120971440Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:33:37.121174 containerd[1462]: time="2026-04-17T23:33:37.121116798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:33:37.121174 containerd[1462]: time="2026-04-17T23:33:37.121127249Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:33:37.121599 containerd[1462]: time="2026-04-17T23:33:37.121562239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:33:37.128719 containerd[1462]: time="2026-04-17T23:33:37.128208618Z" level=info msg="StartContainer for \"08aab77126ca49dabac274ecb9ba9e2e4a3ad1509ca96476b13d377326c21811\" returns successfully"
Apr 17 23:33:37.146183 systemd[1]: Started cri-containerd-b9d62ba0ad8864038bb04ca2bbf0b4dabf318725e85296597c5e74c36ef6bec7.scope - libcontainer container b9d62ba0ad8864038bb04ca2bbf0b4dabf318725e85296597c5e74c36ef6bec7.
Apr 17 23:33:37.164970 kubelet[2504]: E0417 23:33:37.164876 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:33:37.165578 kubelet[2504]: E0417 23:33:37.165371 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:33:37.189981 containerd[1462]: time="2026-04-17T23:33:37.189888183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-6sc4x,Uid:297d3004-fcdc-467d-92f5-64a6b00c8019,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b9d62ba0ad8864038bb04ca2bbf0b4dabf318725e85296597c5e74c36ef6bec7\""
Apr 17 23:33:37.193118 containerd[1462]: time="2026-04-17T23:33:37.193085600Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 17 23:33:39.202333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount786962041.mount: Deactivated successfully.
Apr 17 23:33:40.728636 kubelet[2504]: E0417 23:33:40.728551 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:33:40.743424 kubelet[2504]: I0417 23:33:40.743132 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bkmck" podStartSLOduration=4.743114606 podStartE2EDuration="4.743114606s" podCreationTimestamp="2026-04-17 23:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:33:37.19198262 +0000 UTC m=+8.141256226" watchObservedRunningTime="2026-04-17 23:33:40.743114606 +0000 UTC m=+11.692388218"
Apr 17 23:33:41.368300 containerd[1462]: time="2026-04-17T23:33:41.368225726Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:33:41.369491 containerd[1462]: time="2026-04-17T23:33:41.369436769Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 17 23:33:41.370724 containerd[1462]: time="2026-04-17T23:33:41.370641996Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:33:41.372648 containerd[1462]: time="2026-04-17T23:33:41.372603096Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:33:41.373279 containerd[1462]: time="2026-04-17T23:33:41.373248687Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 4.180126616s"
Apr 17 23:33:41.373331 containerd[1462]: time="2026-04-17T23:33:41.373281241Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 17 23:33:41.377772 containerd[1462]: time="2026-04-17T23:33:41.377734747Z" level=info msg="CreateContainer within sandbox \"b9d62ba0ad8864038bb04ca2bbf0b4dabf318725e85296597c5e74c36ef6bec7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 17 23:33:41.387044 containerd[1462]: time="2026-04-17T23:33:41.386969812Z" level=info msg="CreateContainer within sandbox \"b9d62ba0ad8864038bb04ca2bbf0b4dabf318725e85296597c5e74c36ef6bec7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2f733ced6b0fc632cf35005bd937f809724ed9444974f1efe29c3dd9aa91e0a0\""
Apr 17 23:33:41.387484 containerd[1462]: time="2026-04-17T23:33:41.387445793Z" level=info msg="StartContainer for \"2f733ced6b0fc632cf35005bd937f809724ed9444974f1efe29c3dd9aa91e0a0\""
Apr 17 23:33:41.414349 systemd[1]: Started cri-containerd-2f733ced6b0fc632cf35005bd937f809724ed9444974f1efe29c3dd9aa91e0a0.scope - libcontainer container 2f733ced6b0fc632cf35005bd937f809724ed9444974f1efe29c3dd9aa91e0a0.
Apr 17 23:33:41.445644 containerd[1462]: time="2026-04-17T23:33:41.445570884Z" level=info msg="StartContainer for \"2f733ced6b0fc632cf35005bd937f809724ed9444974f1efe29c3dd9aa91e0a0\" returns successfully"
Apr 17 23:33:44.843764 kubelet[2504]: E0417 23:33:44.843722 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:33:44.856360 kubelet[2504]: I0417 23:33:44.856195 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-6sc4x" podStartSLOduration=4.674647373 podStartE2EDuration="8.856174996s" podCreationTimestamp="2026-04-17 23:33:36 +0000 UTC" firstStartedPulling="2026-04-17 23:33:37.192568566 +0000 UTC m=+8.141842172" lastFinishedPulling="2026-04-17 23:33:41.374096191 +0000 UTC m=+12.323369795" observedRunningTime="2026-04-17 23:33:42.191181444 +0000 UTC m=+13.140455057" watchObservedRunningTime="2026-04-17 23:33:44.856174996 +0000 UTC m=+15.805448598"
Apr 17 23:33:45.187543 kubelet[2504]: E0417 23:33:45.187405 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 17 23:33:46.442532 sudo[1636]: pam_unix(sudo:session): session closed for user root
Apr 17 23:33:46.444242 sshd[1633]: pam_unix(sshd:session): session closed for user core
Apr 17 23:33:46.448418 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:36992.service: Deactivated successfully.
Apr 17 23:33:46.451343 systemd[1]: session-7.scope: Deactivated successfully.
Apr 17 23:33:46.451530 systemd[1]: session-7.scope: Consumed 4.704s CPU time, 156.6M memory peak, 0B memory swap peak.
Apr 17 23:33:46.452098 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit.
Apr 17 23:33:46.453386 systemd-logind[1440]: Removed session 7.
Apr 17 23:33:47.852753 kubelet[2504]: I0417 23:33:47.852696 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrxv5\" (UniqueName: \"kubernetes.io/projected/4bf19dc8-bdbf-4338-a6d4-5eac90267b48-kube-api-access-mrxv5\") pod \"calico-typha-7486755d84-jql9f\" (UID: \"4bf19dc8-bdbf-4338-a6d4-5eac90267b48\") " pod="calico-system/calico-typha-7486755d84-jql9f"
Apr 17 23:33:47.852753 kubelet[2504]: I0417 23:33:47.852755 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bf19dc8-bdbf-4338-a6d4-5eac90267b48-tigera-ca-bundle\") pod \"calico-typha-7486755d84-jql9f\" (UID: \"4bf19dc8-bdbf-4338-a6d4-5eac90267b48\") " pod="calico-system/calico-typha-7486755d84-jql9f"
Apr 17 23:33:47.853426 kubelet[2504]: I0417 23:33:47.852776 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4bf19dc8-bdbf-4338-a6d4-5eac90267b48-typha-certs\") pod \"calico-typha-7486755d84-jql9f\" (UID: \"4bf19dc8-bdbf-4338-a6d4-5eac90267b48\") " pod="calico-system/calico-typha-7486755d84-jql9f"
Apr 17 23:33:47.857157 systemd[1]: Created slice kubepods-besteffort-pod4bf19dc8_bdbf_4338_a6d4_5eac90267b48.slice - libcontainer container kubepods-besteffort-pod4bf19dc8_bdbf_4338_a6d4_5eac90267b48.slice.
Apr 17 23:33:47.935360 systemd[1]: Created slice kubepods-besteffort-podd5ca4dd7_7a2b_4b82_ad12_957513aa1b9d.slice - libcontainer container kubepods-besteffort-podd5ca4dd7_7a2b_4b82_ad12_957513aa1b9d.slice.
Apr 17 23:33:47.953727 kubelet[2504]: I0417 23:33:47.953649 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-bpffs\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.953727 kubelet[2504]: I0417 23:33:47.953706 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-lib-modules\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.953727 kubelet[2504]: I0417 23:33:47.953721 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-sys-fs\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.953727 kubelet[2504]: I0417 23:33:47.953736 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96fzl\" (UniqueName: \"kubernetes.io/projected/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-kube-api-access-96fzl\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.953727 kubelet[2504]: I0417 23:33:47.953748 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-policysync\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.954137 kubelet[2504]: I0417 23:33:47.953769 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-cni-log-dir\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.954137 kubelet[2504]: I0417 23:33:47.953960 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-var-lib-calico\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.954137 kubelet[2504]: I0417 23:33:47.953988 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-xtables-lock\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.954137 kubelet[2504]: I0417 23:33:47.954062 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-nodeproc\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.954137 kubelet[2504]: I0417 23:33:47.954079 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-cni-bin-dir\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.954276 kubelet[2504]: I0417 23:33:47.954096 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\"
(UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-var-run-calico\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.954276 kubelet[2504]: I0417 23:33:47.954110 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-cni-net-dir\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.954276 kubelet[2504]: I0417 23:33:47.954123 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-flexvol-driver-host\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.954276 kubelet[2504]: I0417 23:33:47.954140 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-node-certs\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:47.954276 kubelet[2504]: I0417 23:33:47.954153 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d-tigera-ca-bundle\") pod \"calico-node-k8lqj\" (UID: \"d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d\") " pod="calico-system/calico-node-k8lqj"
Apr 17 23:33:48.025961 kubelet[2504]: E0417 23:33:48.025889 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-99tqr" podUID="ca6b2b6e-bb01-4db2-9121-3bab00f81e9d"
Apr 17 23:33:48.054901 kubelet[2504]: I0417 23:33:48.054860 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2rq9\" (UniqueName: \"kubernetes.io/projected/ca6b2b6e-bb01-4db2-9121-3bab00f81e9d-kube-api-access-l2rq9\") pod \"csi-node-driver-99tqr\" (UID: \"ca6b2b6e-bb01-4db2-9121-3bab00f81e9d\") " pod="calico-system/csi-node-driver-99tqr"
Apr 17 23:33:48.055071 kubelet[2504]: I0417 23:33:48.054983 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ca6b2b6e-bb01-4db2-9121-3bab00f81e9d-socket-dir\") pod \"csi-node-driver-99tqr\" (UID: \"ca6b2b6e-bb01-4db2-9121-3bab00f81e9d\") " pod="calico-system/csi-node-driver-99tqr"
Apr 17 23:33:48.055071 kubelet[2504]: I0417 23:33:48.055058 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ca6b2b6e-bb01-4db2-9121-3bab00f81e9d-registration-dir\") pod \"csi-node-driver-99tqr\" (UID: \"ca6b2b6e-bb01-4db2-9121-3bab00f81e9d\") " pod="calico-system/csi-node-driver-99tqr"
Apr 17 23:33:48.055173 kubelet[2504]: I0417 23:33:48.055153 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ca6b2b6e-bb01-4db2-9121-3bab00f81e9d-kubelet-dir\") pod \"csi-node-driver-99tqr\" (UID: \"ca6b2b6e-bb01-4db2-9121-3bab00f81e9d\") " pod="calico-system/csi-node-driver-99tqr"
Apr 17 23:33:48.055220 kubelet[2504]: I0417 23:33:48.055187 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ca6b2b6e-bb01-4db2-9121-3bab00f81e9d-varrun\") pod \"csi-node-driver-99tqr\" (UID: \"ca6b2b6e-bb01-4db2-9121-3bab00f81e9d\") " pod="calico-system/csi-node-driver-99tqr"
Apr 17 23:33:48.056950 kubelet[2504]: E0417 23:33:48.056927 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.056950 kubelet[2504]: W0417 23:33:48.056948 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.057077 kubelet[2504]: E0417 23:33:48.056964 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.060024 kubelet[2504]: E0417 23:33:48.058718 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.060024 kubelet[2504]: W0417 23:33:48.058734 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.060024 kubelet[2504]: E0417 23:33:48.058755 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 17 23:33:48.064691 kubelet[2504]: E0417 23:33:48.064605 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.064691 kubelet[2504]: W0417 23:33:48.064620 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.064691 kubelet[2504]: E0417 23:33:48.064630 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.156017 kubelet[2504]: E0417 23:33:48.155879 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.156017 kubelet[2504]: W0417 23:33:48.155904 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.156017 kubelet[2504]: E0417 23:33:48.155925 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.156167 kubelet[2504]: E0417 23:33:48.156115 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.156167 kubelet[2504]: W0417 23:33:48.156120 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.156167 kubelet[2504]: E0417 23:33:48.156127 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.156286 kubelet[2504]: E0417 23:33:48.156251 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.156286 kubelet[2504]: W0417 23:33:48.156255 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.156286 kubelet[2504]: E0417 23:33:48.156261 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.156551 kubelet[2504]: E0417 23:33:48.156535 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.156551 kubelet[2504]: W0417 23:33:48.156550 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.156606 kubelet[2504]: E0417 23:33:48.156557 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.156815 kubelet[2504]: E0417 23:33:48.156798 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.156815 kubelet[2504]: W0417 23:33:48.156809 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.156815 kubelet[2504]: E0417 23:33:48.156821 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 17 23:33:48.157180 kubelet[2504]: E0417 23:33:48.157157 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.157180 kubelet[2504]: W0417 23:33:48.157179 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.157245 kubelet[2504]: E0417 23:33:48.157189 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.157408 kubelet[2504]: E0417 23:33:48.157384 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.157408 kubelet[2504]: W0417 23:33:48.157402 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.157446 kubelet[2504]: E0417 23:33:48.157409 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.157674 kubelet[2504]: E0417 23:33:48.157643 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.157674 kubelet[2504]: W0417 23:33:48.157671 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.157716 kubelet[2504]: E0417 23:33:48.157678 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.157853 kubelet[2504]: E0417 23:33:48.157840 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.157853 kubelet[2504]: W0417 23:33:48.157851 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.157891 kubelet[2504]: E0417 23:33:48.157858 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.158088 kubelet[2504]: E0417 23:33:48.158076 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.158112 kubelet[2504]: W0417 23:33:48.158088 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.158112 kubelet[2504]: E0417 23:33:48.158094 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.158266 kubelet[2504]: E0417 23:33:48.158253 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.158266 kubelet[2504]: W0417 23:33:48.158264 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.158300 kubelet[2504]: E0417 23:33:48.158270 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Apr 17 23:33:48.158452 kubelet[2504]: E0417 23:33:48.158434 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.158452 kubelet[2504]: W0417 23:33:48.158446 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.158452 kubelet[2504]: E0417 23:33:48.158451 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.158643 kubelet[2504]: E0417 23:33:48.158630 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.158711 kubelet[2504]: W0417 23:33:48.158643 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.158711 kubelet[2504]: E0417 23:33:48.158650 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.158872 kubelet[2504]: E0417 23:33:48.158857 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.158872 kubelet[2504]: W0417 23:33:48.158869 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.158919 kubelet[2504]: E0417 23:33:48.158874 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.159135 kubelet[2504]: E0417 23:33:48.159118 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.159161 kubelet[2504]: W0417 23:33:48.159136 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.159161 kubelet[2504]: E0417 23:33:48.159148 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.159325 kubelet[2504]: E0417 23:33:48.159312 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.159325 kubelet[2504]: W0417 23:33:48.159324 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.159363 kubelet[2504]: E0417 23:33:48.159331 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:33:48.159533 kubelet[2504]: E0417 23:33:48.159501 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:33:48.159533 kubelet[2504]: W0417 23:33:48.159517 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:33:48.159533 kubelet[2504]: E0417 23:33:48.159525 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 17 23:33:48.159709 kubelet[2504]: E0417 23:33:48.159685 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:48.159975 kubelet[2504]: E0417 23:33:48.159945 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:48.159975 kubelet[2504]: W0417 23:33:48.159965 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:48.160050 kubelet[2504]: E0417 23:33:48.160025 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:48.160184 containerd[1462]: time="2026-04-17T23:33:48.160144611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7486755d84-jql9f,Uid:4bf19dc8-bdbf-4338-a6d4-5eac90267b48,Namespace:calico-system,Attempt:0,}" Apr 17 23:33:48.160587 kubelet[2504]: E0417 23:33:48.160239 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:48.160587 kubelet[2504]: W0417 23:33:48.160245 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:48.160587 kubelet[2504]: E0417 23:33:48.160251 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:48.160587 kubelet[2504]: E0417 23:33:48.160545 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:48.160587 kubelet[2504]: W0417 23:33:48.160555 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:48.160587 kubelet[2504]: E0417 23:33:48.160563 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:48.160875 kubelet[2504]: E0417 23:33:48.160860 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:48.160875 kubelet[2504]: W0417 23:33:48.160874 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:48.160943 kubelet[2504]: E0417 23:33:48.160881 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:48.161198 kubelet[2504]: E0417 23:33:48.161067 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:48.161198 kubelet[2504]: W0417 23:33:48.161077 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:48.161198 kubelet[2504]: E0417 23:33:48.161113 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:48.161385 kubelet[2504]: E0417 23:33:48.161370 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:48.161385 kubelet[2504]: W0417 23:33:48.161384 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:48.161458 kubelet[2504]: E0417 23:33:48.161392 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:48.161628 kubelet[2504]: E0417 23:33:48.161610 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:48.161628 kubelet[2504]: W0417 23:33:48.161625 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:48.161733 kubelet[2504]: E0417 23:33:48.161634 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:48.161848 kubelet[2504]: E0417 23:33:48.161822 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:48.161848 kubelet[2504]: W0417 23:33:48.161848 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:48.161962 kubelet[2504]: E0417 23:33:48.161857 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:48.173308 kubelet[2504]: E0417 23:33:48.173287 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:48.173308 kubelet[2504]: W0417 23:33:48.173306 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:48.173401 kubelet[2504]: E0417 23:33:48.173320 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:48.184697 containerd[1462]: time="2026-04-17T23:33:48.184517669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:33:48.184847 containerd[1462]: time="2026-04-17T23:33:48.184720939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:33:48.184847 containerd[1462]: time="2026-04-17T23:33:48.184731628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:33:48.184946 containerd[1462]: time="2026-04-17T23:33:48.184906737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:33:48.203201 systemd[1]: Started cri-containerd-3f88fbaebc12f27e123b7c679e358986cf4e7db5e62053b46a6682f326fe7bc4.scope - libcontainer container 3f88fbaebc12f27e123b7c679e358986cf4e7db5e62053b46a6682f326fe7bc4. 
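The repeated driver-call.go/plugins.go failures above come from kubelet probing a FlexVolume driver whose executable (`/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds`) is not installed: the exec fails with "executable file not found", the driver's output is therefore the empty string, and unmarshalling "" as JSON yields "unexpected end of JSON input". A minimal sketch of that failure mode (Python stdlib standing in for kubelet's Go driver-call code; `probe_driver` is a hypothetical helper, not kubelet's API):

```python
import json
import subprocess

# Driver path taken from the log entries above; it does not exist on this node.
DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

def probe_driver(path):
    """Mimic the FlexVolume probe: exec `<driver> init` and parse its JSON reply."""
    try:
        out = subprocess.run([path, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        # kubelet logs: error: executable file not found in $PATH, output: ""
        out = ""
    try:
        return json.loads(out)
    except json.JSONDecodeError as exc:
        # Corresponds to driver-call.go:262 "Failed to unmarshal output ...
        # error: unexpected end of JSON input"
        return {"status": "Failure", "message": f"unmarshal failed: {exc}"}

result = probe_driver(DRIVER)
```

Installing a driver that prints a JSON status object (e.g. `{"status": "Success", "capabilities": {...}}`) from `init`, or removing the stale `nodeagent~uds` plugin directory, would silence this probe loop.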
Apr 17 23:33:48.240077 containerd[1462]: time="2026-04-17T23:33:48.239903469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k8lqj,Uid:d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d,Namespace:calico-system,Attempt:0,}" Apr 17 23:33:48.245220 containerd[1462]: time="2026-04-17T23:33:48.245159290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7486755d84-jql9f,Uid:4bf19dc8-bdbf-4338-a6d4-5eac90267b48,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f88fbaebc12f27e123b7c679e358986cf4e7db5e62053b46a6682f326fe7bc4\"" Apr 17 23:33:48.246188 kubelet[2504]: E0417 23:33:48.246161 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:48.247232 containerd[1462]: time="2026-04-17T23:33:48.247203265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 17 23:33:48.266988 containerd[1462]: time="2026-04-17T23:33:48.266400780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:33:48.266988 containerd[1462]: time="2026-04-17T23:33:48.266507287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:33:48.266988 containerd[1462]: time="2026-04-17T23:33:48.266518415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:33:48.266988 containerd[1462]: time="2026-04-17T23:33:48.266641950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:33:48.288235 systemd[1]: Started cri-containerd-bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb.scope - libcontainer container bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb. Apr 17 23:33:48.304453 containerd[1462]: time="2026-04-17T23:33:48.304400831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-k8lqj,Uid:d5ca4dd7-7a2b-4b82-ad12-957513aa1b9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb\"" Apr 17 23:33:49.782731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703872765.mount: Deactivated successfully. Apr 17 23:33:50.102848 containerd[1462]: time="2026-04-17T23:33:50.102709704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:50.103788 containerd[1462]: time="2026-04-17T23:33:50.103622634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 17 23:33:50.104681 containerd[1462]: time="2026-04-17T23:33:50.104611522Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:50.106546 containerd[1462]: time="2026-04-17T23:33:50.106501263Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:50.107061 containerd[1462]: time="2026-04-17T23:33:50.107044074Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 1.859805942s" Apr 17 23:33:50.107099 containerd[1462]: time="2026-04-17T23:33:50.107066119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 17 23:33:50.108925 containerd[1462]: time="2026-04-17T23:33:50.107864325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 17 23:33:50.118560 containerd[1462]: time="2026-04-17T23:33:50.118506554Z" level=info msg="CreateContainer within sandbox \"3f88fbaebc12f27e123b7c679e358986cf4e7db5e62053b46a6682f326fe7bc4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 17 23:33:50.129489 containerd[1462]: time="2026-04-17T23:33:50.129429328Z" level=info msg="CreateContainer within sandbox \"3f88fbaebc12f27e123b7c679e358986cf4e7db5e62053b46a6682f326fe7bc4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b611d039ae2ee19fc5bb85641d2c07b7962c2c1e11afb46226e307c2805b2be7\"" Apr 17 23:33:50.130786 containerd[1462]: time="2026-04-17T23:33:50.129897618Z" level=info msg="StartContainer for \"b611d039ae2ee19fc5bb85641d2c07b7962c2c1e11afb46226e307c2805b2be7\"" Apr 17 23:33:50.133873 kubelet[2504]: E0417 23:33:50.133818 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-99tqr" podUID="ca6b2b6e-bb01-4db2-9121-3bab00f81e9d" Apr 17 23:33:50.157196 systemd[1]: Started cri-containerd-b611d039ae2ee19fc5bb85641d2c07b7962c2c1e11afb46226e307c2805b2be7.scope - libcontainer container b611d039ae2ee19fc5bb85641d2c07b7962c2c1e11afb46226e307c2805b2be7. 
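The typha image pull above reports "in 1.859805942s", and the startup-latency tracker in the following entries logs the matching `firstStartedPulling`/`lastFinishedPulling` timestamps. As a rough cross-check (stdlib datetime arithmetic; timestamps truncated to whole microseconds, so it approximates rather than reproduces the reported figure):

```python
from datetime import datetime

# firstStartedPulling / lastFinishedPulling from the pod_startup_latency_tracker
# entry, truncated from 7 to 6 fractional digits for fromisoformat().
first_pull = datetime.fromisoformat("2026-04-17 23:33:48.246876+00:00")
last_pull = datetime.fromisoformat("2026-04-17 23:33:50.107627+00:00")

pull_window = (last_pull - first_pull).total_seconds()
# ~1.86 s, close to the 1.859805942s duration containerd reports for the pull
```

The small gap (about a millisecond) is expected: containerd times the pull itself, while the tracker records when kubelet observed pulling start and finish.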
Apr 17 23:33:50.191222 containerd[1462]: time="2026-04-17T23:33:50.191182900Z" level=info msg="StartContainer for \"b611d039ae2ee19fc5bb85641d2c07b7962c2c1e11afb46226e307c2805b2be7\" returns successfully" Apr 17 23:33:50.203602 kubelet[2504]: E0417 23:33:50.203486 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:50.213681 kubelet[2504]: I0417 23:33:50.213615 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7486755d84-jql9f" podStartSLOduration=1.352847896 podStartE2EDuration="3.213599114s" podCreationTimestamp="2026-04-17 23:33:47 +0000 UTC" firstStartedPulling="2026-04-17 23:33:48.2468762 +0000 UTC m=+19.196149801" lastFinishedPulling="2026-04-17 23:33:50.107627414 +0000 UTC m=+21.056901019" observedRunningTime="2026-04-17 23:33:50.213165738 +0000 UTC m=+21.162439350" watchObservedRunningTime="2026-04-17 23:33:50.213599114 +0000 UTC m=+21.162872726" Apr 17 23:33:50.269185 kubelet[2504]: E0417 23:33:50.269161 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.269668 kubelet[2504]: W0417 23:33:50.269297 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.269668 kubelet[2504]: E0417 23:33:50.269319 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:50.270226 kubelet[2504]: E0417 23:33:50.269957 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.270226 kubelet[2504]: W0417 23:33:50.269972 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.270226 kubelet[2504]: E0417 23:33:50.269988 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:50.270614 kubelet[2504]: E0417 23:33:50.270563 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.270614 kubelet[2504]: W0417 23:33:50.270572 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.270614 kubelet[2504]: E0417 23:33:50.270580 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:50.271487 kubelet[2504]: E0417 23:33:50.271181 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.271487 kubelet[2504]: W0417 23:33:50.271189 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.271487 kubelet[2504]: E0417 23:33:50.271198 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:50.272076 kubelet[2504]: E0417 23:33:50.272065 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.272166 kubelet[2504]: W0417 23:33:50.272128 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.272166 kubelet[2504]: E0417 23:33:50.272138 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:50.274221 kubelet[2504]: E0417 23:33:50.274134 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.274221 kubelet[2504]: W0417 23:33:50.274145 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.274221 kubelet[2504]: E0417 23:33:50.274153 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:50.274380 kubelet[2504]: E0417 23:33:50.274374 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.274417 kubelet[2504]: W0417 23:33:50.274412 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.274475 kubelet[2504]: E0417 23:33:50.274446 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:50.274584 kubelet[2504]: E0417 23:33:50.274579 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.274684 kubelet[2504]: W0417 23:33:50.274612 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.274684 kubelet[2504]: E0417 23:33:50.274619 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:50.274810 kubelet[2504]: E0417 23:33:50.274805 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.274841 kubelet[2504]: W0417 23:33:50.274836 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.274874 kubelet[2504]: E0417 23:33:50.274868 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:50.275215 kubelet[2504]: E0417 23:33:50.275171 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.275456 kubelet[2504]: W0417 23:33:50.275357 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.275456 kubelet[2504]: E0417 23:33:50.275376 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:50.277103 kubelet[2504]: E0417 23:33:50.277027 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.277103 kubelet[2504]: W0417 23:33:50.277038 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.277103 kubelet[2504]: E0417 23:33:50.277046 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:50.277620 kubelet[2504]: E0417 23:33:50.277553 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.277620 kubelet[2504]: W0417 23:33:50.277561 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.277620 kubelet[2504]: E0417 23:33:50.277569 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:50.279538 kubelet[2504]: E0417 23:33:50.279495 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.279538 kubelet[2504]: W0417 23:33:50.279504 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.279538 kubelet[2504]: E0417 23:33:50.279512 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:50.284559 kubelet[2504]: E0417 23:33:50.284459 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.284559 kubelet[2504]: W0417 23:33:50.284472 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.284559 kubelet[2504]: E0417 23:33:50.284483 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:50.285697 kubelet[2504]: E0417 23:33:50.285325 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.285697 kubelet[2504]: W0417 23:33:50.285406 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.285697 kubelet[2504]: E0417 23:33:50.285418 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:50.378676 kubelet[2504]: E0417 23:33:50.378491 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.378676 kubelet[2504]: W0417 23:33:50.378521 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.378676 kubelet[2504]: E0417 23:33:50.378543 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:50.378904 kubelet[2504]: E0417 23:33:50.378806 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.378904 kubelet[2504]: W0417 23:33:50.378814 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.378904 kubelet[2504]: E0417 23:33:50.378823 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:50.379144 kubelet[2504]: E0417 23:33:50.379096 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.379144 kubelet[2504]: W0417 23:33:50.379115 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.379144 kubelet[2504]: E0417 23:33:50.379124 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:50.379424 kubelet[2504]: E0417 23:33:50.379401 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.379424 kubelet[2504]: W0417 23:33:50.379423 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.379482 kubelet[2504]: E0417 23:33:50.379435 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:50.379635 kubelet[2504]: E0417 23:33:50.379622 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.379635 kubelet[2504]: W0417 23:33:50.379628 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.379635 kubelet[2504]: E0417 23:33:50.379633 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:50.379776 kubelet[2504]: E0417 23:33:50.379758 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:50.379776 kubelet[2504]: W0417 23:33:50.379778 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:50.379776 kubelet[2504]: E0417 23:33:50.379785 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:51.203224 update_engine[1445]: I20260417 23:33:51.203119 1445 update_attempter.cc:509] Updating boot flags... 
Apr 17 23:33:51.206138 kubelet[2504]: I0417 23:33:51.206113 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:33:51.206416 kubelet[2504]: E0417 23:33:51.206394 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:51.224087 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3135) Apr 17 23:33:51.259085 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 32 scanned by (udev-worker) (3136) Apr 17 23:33:51.291766 kubelet[2504]: E0417 23:33:51.291728 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:51.291766 kubelet[2504]: W0417 23:33:51.291753 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:51.291766 kubelet[2504]: E0417 23:33:51.291775 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:33:51.291977 kubelet[2504]: E0417 23:33:51.291959 2504 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:33:51.291977 kubelet[2504]: W0417 23:33:51.291966 2504 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:33:51.291977 kubelet[2504]: E0417 23:33:51.291973 2504 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:33:51.447736 containerd[1462]: time="2026-04-17T23:33:51.447688797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:51.448929 containerd[1462]: time="2026-04-17T23:33:51.448877744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 17 23:33:51.450173 containerd[1462]: time="2026-04-17T23:33:51.450137876Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:51.452296 containerd[1462]: time="2026-04-17T23:33:51.452250728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:51.452848 containerd[1462]: time="2026-04-17T23:33:51.452806646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.344918955s" Apr 17 23:33:51.452869 containerd[1462]: time="2026-04-17T23:33:51.452850809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 23:33:51.457340 containerd[1462]: time="2026-04-17T23:33:51.457180701Z" level=info msg="CreateContainer within sandbox \"bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:33:51.471419 containerd[1462]: time="2026-04-17T23:33:51.471350310Z" level=info msg="CreateContainer within sandbox \"bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0f07bdfbbccdd2ccb9e2ab6aa2a48ad51422a7b3efd0d05a9fa54c3053bccb8f\"" Apr 17 23:33:51.472078 containerd[1462]: time="2026-04-17T23:33:51.472040820Z" level=info msg="StartContainer for \"0f07bdfbbccdd2ccb9e2ab6aa2a48ad51422a7b3efd0d05a9fa54c3053bccb8f\"" Apr 17 23:33:51.510254 systemd[1]: Started cri-containerd-0f07bdfbbccdd2ccb9e2ab6aa2a48ad51422a7b3efd0d05a9fa54c3053bccb8f.scope - libcontainer container 0f07bdfbbccdd2ccb9e2ab6aa2a48ad51422a7b3efd0d05a9fa54c3053bccb8f. Apr 17 23:33:51.533113 containerd[1462]: time="2026-04-17T23:33:51.533067477Z" level=info msg="StartContainer for \"0f07bdfbbccdd2ccb9e2ab6aa2a48ad51422a7b3efd0d05a9fa54c3053bccb8f\" returns successfully" Apr 17 23:33:51.540131 systemd[1]: cri-containerd-0f07bdfbbccdd2ccb9e2ab6aa2a48ad51422a7b3efd0d05a9fa54c3053bccb8f.scope: Deactivated successfully. 
Apr 17 23:33:51.560759 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0f07bdfbbccdd2ccb9e2ab6aa2a48ad51422a7b3efd0d05a9fa54c3053bccb8f-rootfs.mount: Deactivated successfully. Apr 17 23:33:51.643349 containerd[1462]: time="2026-04-17T23:33:51.641434438Z" level=info msg="shim disconnected" id=0f07bdfbbccdd2ccb9e2ab6aa2a48ad51422a7b3efd0d05a9fa54c3053bccb8f namespace=k8s.io Apr 17 23:33:51.643349 containerd[1462]: time="2026-04-17T23:33:51.643326831Z" level=warning msg="cleaning up after shim disconnected" id=0f07bdfbbccdd2ccb9e2ab6aa2a48ad51422a7b3efd0d05a9fa54c3053bccb8f namespace=k8s.io Apr 17 23:33:51.643349 containerd[1462]: time="2026-04-17T23:33:51.643337212Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:33:52.134452 kubelet[2504]: E0417 23:33:52.134389 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-99tqr" podUID="ca6b2b6e-bb01-4db2-9121-3bab00f81e9d" Apr 17 23:33:52.209946 containerd[1462]: time="2026-04-17T23:33:52.209879790Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:33:54.134350 kubelet[2504]: E0417 23:33:54.134296 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-99tqr" podUID="ca6b2b6e-bb01-4db2-9121-3bab00f81e9d" Apr 17 23:33:55.379786 kubelet[2504]: E0417 23:33:55.379760 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:55.678336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2883148390.mount: 
Deactivated successfully. Apr 17 23:33:55.710419 containerd[1462]: time="2026-04-17T23:33:55.710348563Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:55.711095 containerd[1462]: time="2026-04-17T23:33:55.711028850Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 17 23:33:55.712193 containerd[1462]: time="2026-04-17T23:33:55.712126030Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:55.716840 containerd[1462]: time="2026-04-17T23:33:55.716795873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:55.717365 containerd[1462]: time="2026-04-17T23:33:55.717313691Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 3.507280777s" Apr 17 23:33:55.717365 containerd[1462]: time="2026-04-17T23:33:55.717357828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 17 23:33:55.754801 containerd[1462]: time="2026-04-17T23:33:55.754568138Z" level=info msg="CreateContainer within sandbox \"bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:33:55.822555 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount505790528.mount: Deactivated successfully. Apr 17 23:33:55.832876 containerd[1462]: time="2026-04-17T23:33:55.832820976Z" level=info msg="CreateContainer within sandbox \"bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"a78be77893e202c5113b6e1b346b5ee9a7786b15d573c1a7a0795c32792d6221\"" Apr 17 23:33:55.833560 containerd[1462]: time="2026-04-17T23:33:55.833541185Z" level=info msg="StartContainer for \"a78be77893e202c5113b6e1b346b5ee9a7786b15d573c1a7a0795c32792d6221\"" Apr 17 23:33:55.909305 systemd[1]: Started cri-containerd-a78be77893e202c5113b6e1b346b5ee9a7786b15d573c1a7a0795c32792d6221.scope - libcontainer container a78be77893e202c5113b6e1b346b5ee9a7786b15d573c1a7a0795c32792d6221. Apr 17 23:33:55.941191 containerd[1462]: time="2026-04-17T23:33:55.941077637Z" level=info msg="StartContainer for \"a78be77893e202c5113b6e1b346b5ee9a7786b15d573c1a7a0795c32792d6221\" returns successfully" Apr 17 23:33:55.982405 systemd[1]: cri-containerd-a78be77893e202c5113b6e1b346b5ee9a7786b15d573c1a7a0795c32792d6221.scope: Deactivated successfully. 
Apr 17 23:33:56.002076 containerd[1462]: time="2026-04-17T23:33:56.001978556Z" level=info msg="shim disconnected" id=a78be77893e202c5113b6e1b346b5ee9a7786b15d573c1a7a0795c32792d6221 namespace=k8s.io Apr 17 23:33:56.002076 containerd[1462]: time="2026-04-17T23:33:56.002064649Z" level=warning msg="cleaning up after shim disconnected" id=a78be77893e202c5113b6e1b346b5ee9a7786b15d573c1a7a0795c32792d6221 namespace=k8s.io Apr 17 23:33:56.002076 containerd[1462]: time="2026-04-17T23:33:56.002073496Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:33:56.135028 kubelet[2504]: E0417 23:33:56.134911 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-99tqr" podUID="ca6b2b6e-bb01-4db2-9121-3bab00f81e9d" Apr 17 23:33:56.222917 containerd[1462]: time="2026-04-17T23:33:56.222791834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:33:56.679779 systemd[1]: run-containerd-runc-k8s.io-a78be77893e202c5113b6e1b346b5ee9a7786b15d573c1a7a0795c32792d6221-runc.2g7MVk.mount: Deactivated successfully. Apr 17 23:33:56.679892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a78be77893e202c5113b6e1b346b5ee9a7786b15d573c1a7a0795c32792d6221-rootfs.mount: Deactivated successfully. 
Apr 17 23:33:58.133835 kubelet[2504]: E0417 23:33:58.133763 2504 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-99tqr" podUID="ca6b2b6e-bb01-4db2-9121-3bab00f81e9d" Apr 17 23:33:58.433438 containerd[1462]: time="2026-04-17T23:33:58.433224395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:58.434499 containerd[1462]: time="2026-04-17T23:33:58.434439095Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:33:58.436061 containerd[1462]: time="2026-04-17T23:33:58.435986493Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:58.439486 containerd[1462]: time="2026-04-17T23:33:58.439256964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:33:58.440190 containerd[1462]: time="2026-04-17T23:33:58.440132953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 2.217294579s" Apr 17 23:33:58.440190 containerd[1462]: time="2026-04-17T23:33:58.440192351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference 
\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:33:58.449254 containerd[1462]: time="2026-04-17T23:33:58.449201252Z" level=info msg="CreateContainer within sandbox \"bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:33:58.530740 containerd[1462]: time="2026-04-17T23:33:58.530622170Z" level=info msg="CreateContainer within sandbox \"bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5a335accd8b84d3ac02358fdcd72343dd2c89c32c19030fe590c13342c808298\"" Apr 17 23:33:58.531566 containerd[1462]: time="2026-04-17T23:33:58.531527389Z" level=info msg="StartContainer for \"5a335accd8b84d3ac02358fdcd72343dd2c89c32c19030fe590c13342c808298\"" Apr 17 23:33:58.569301 systemd[1]: Started cri-containerd-5a335accd8b84d3ac02358fdcd72343dd2c89c32c19030fe590c13342c808298.scope - libcontainer container 5a335accd8b84d3ac02358fdcd72343dd2c89c32c19030fe590c13342c808298. Apr 17 23:33:58.614054 containerd[1462]: time="2026-04-17T23:33:58.613933985Z" level=info msg="StartContainer for \"5a335accd8b84d3ac02358fdcd72343dd2c89c32c19030fe590c13342c808298\" returns successfully" Apr 17 23:33:59.097767 systemd[1]: cri-containerd-5a335accd8b84d3ac02358fdcd72343dd2c89c32c19030fe590c13342c808298.scope: Deactivated successfully. Apr 17 23:33:59.128564 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a335accd8b84d3ac02358fdcd72343dd2c89c32c19030fe590c13342c808298-rootfs.mount: Deactivated successfully. 
Apr 17 23:33:59.132281 containerd[1462]: time="2026-04-17T23:33:59.132173167Z" level=info msg="shim disconnected" id=5a335accd8b84d3ac02358fdcd72343dd2c89c32c19030fe590c13342c808298 namespace=k8s.io Apr 17 23:33:59.132281 containerd[1462]: time="2026-04-17T23:33:59.132238088Z" level=warning msg="cleaning up after shim disconnected" id=5a335accd8b84d3ac02358fdcd72343dd2c89c32c19030fe590c13342c808298 namespace=k8s.io Apr 17 23:33:59.132281 containerd[1462]: time="2026-04-17T23:33:59.132245971Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:33:59.185057 kubelet[2504]: I0417 23:33:59.185029 2504 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:33:59.244446 systemd[1]: Created slice kubepods-burstable-pod705e5c5c_1430_4917_b511_364bd6cc7cb4.slice - libcontainer container kubepods-burstable-pod705e5c5c_1430_4917_b511_364bd6cc7cb4.slice. Apr 17 23:33:59.256349 kubelet[2504]: I0417 23:33:59.256315 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4dd72ea7-b70c-42f1-9ae1-4082b989f41b-calico-apiserver-certs\") pod \"calico-apiserver-59fdff4ffb-2l7pk\" (UID: \"4dd72ea7-b70c-42f1-9ae1-4082b989f41b\") " pod="calico-system/calico-apiserver-59fdff4ffb-2l7pk" Apr 17 23:33:59.256349 kubelet[2504]: I0417 23:33:59.256350 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz7xh\" (UniqueName: \"kubernetes.io/projected/4dd72ea7-b70c-42f1-9ae1-4082b989f41b-kube-api-access-pz7xh\") pod \"calico-apiserver-59fdff4ffb-2l7pk\" (UID: \"4dd72ea7-b70c-42f1-9ae1-4082b989f41b\") " pod="calico-system/calico-apiserver-59fdff4ffb-2l7pk" Apr 17 23:33:59.256614 kubelet[2504]: I0417 23:33:59.256365 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ab6d3304-1ee2-4cfe-846f-4d75ab580639-tigera-ca-bundle\") pod \"calico-kube-controllers-7c59d9f498-7krgp\" (UID: \"ab6d3304-1ee2-4cfe-846f-4d75ab580639\") " pod="calico-system/calico-kube-controllers-7c59d9f498-7krgp" Apr 17 23:33:59.256614 kubelet[2504]: I0417 23:33:59.256649 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/705e5c5c-1430-4917-b511-364bd6cc7cb4-config-volume\") pod \"coredns-674b8bbfcf-9l5kq\" (UID: \"705e5c5c-1430-4917-b511-364bd6cc7cb4\") " pod="kube-system/coredns-674b8bbfcf-9l5kq" Apr 17 23:33:59.256728 kubelet[2504]: I0417 23:33:59.256719 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw8wr\" (UniqueName: \"kubernetes.io/projected/ab6d3304-1ee2-4cfe-846f-4d75ab580639-kube-api-access-vw8wr\") pod \"calico-kube-controllers-7c59d9f498-7krgp\" (UID: \"ab6d3304-1ee2-4cfe-846f-4d75ab580639\") " pod="calico-system/calico-kube-controllers-7c59d9f498-7krgp" Apr 17 23:33:59.256747 kubelet[2504]: I0417 23:33:59.256737 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7c9f\" (UniqueName: \"kubernetes.io/projected/420b2c52-73ae-400b-a63b-96e0bedc89c9-kube-api-access-w7c9f\") pod \"goldmane-5b85766d88-wknrq\" (UID: \"420b2c52-73ae-400b-a63b-96e0bedc89c9\") " pod="calico-system/goldmane-5b85766d88-wknrq" Apr 17 23:33:59.256843 kubelet[2504]: I0417 23:33:59.256761 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8szwp\" (UniqueName: \"kubernetes.io/projected/705e5c5c-1430-4917-b511-364bd6cc7cb4-kube-api-access-8szwp\") pod \"coredns-674b8bbfcf-9l5kq\" (UID: \"705e5c5c-1430-4917-b511-364bd6cc7cb4\") " pod="kube-system/coredns-674b8bbfcf-9l5kq" Apr 17 23:33:59.256880 kubelet[2504]: I0417 23:33:59.256853 2504 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/420b2c52-73ae-400b-a63b-96e0bedc89c9-config\") pod \"goldmane-5b85766d88-wknrq\" (UID: \"420b2c52-73ae-400b-a63b-96e0bedc89c9\") " pod="calico-system/goldmane-5b85766d88-wknrq" Apr 17 23:33:59.256880 kubelet[2504]: I0417 23:33:59.256867 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/420b2c52-73ae-400b-a63b-96e0bedc89c9-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-wknrq\" (UID: \"420b2c52-73ae-400b-a63b-96e0bedc89c9\") " pod="calico-system/goldmane-5b85766d88-wknrq" Apr 17 23:33:59.257401 kubelet[2504]: I0417 23:33:59.257330 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/72df670f-46f7-4a37-8c88-a75339da8060-config-volume\") pod \"coredns-674b8bbfcf-pfnkx\" (UID: \"72df670f-46f7-4a37-8c88-a75339da8060\") " pod="kube-system/coredns-674b8bbfcf-pfnkx" Apr 17 23:33:59.257401 kubelet[2504]: I0417 23:33:59.257395 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcjdp\" (UniqueName: \"kubernetes.io/projected/72df670f-46f7-4a37-8c88-a75339da8060-kube-api-access-lcjdp\") pod \"coredns-674b8bbfcf-pfnkx\" (UID: \"72df670f-46f7-4a37-8c88-a75339da8060\") " pod="kube-system/coredns-674b8bbfcf-pfnkx" Apr 17 23:33:59.258840 kubelet[2504]: I0417 23:33:59.257409 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/420b2c52-73ae-400b-a63b-96e0bedc89c9-goldmane-key-pair\") pod \"goldmane-5b85766d88-wknrq\" (UID: \"420b2c52-73ae-400b-a63b-96e0bedc89c9\") " pod="calico-system/goldmane-5b85766d88-wknrq" Apr 17 23:33:59.258951 kubelet[2504]: I0417 
23:33:59.258762 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnv86\" (UniqueName: \"kubernetes.io/projected/bd0cce6d-84a0-4bd0-9b88-65c21aa33a1b-kube-api-access-pnv86\") pod \"calico-apiserver-59fdff4ffb-9pwcd\" (UID: \"bd0cce6d-84a0-4bd0-9b88-65c21aa33a1b\") " pod="calico-system/calico-apiserver-59fdff4ffb-9pwcd" Apr 17 23:33:59.259045 kubelet[2504]: I0417 23:33:59.258958 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd0cce6d-84a0-4bd0-9b88-65c21aa33a1b-calico-apiserver-certs\") pod \"calico-apiserver-59fdff4ffb-9pwcd\" (UID: \"bd0cce6d-84a0-4bd0-9b88-65c21aa33a1b\") " pod="calico-system/calico-apiserver-59fdff4ffb-9pwcd" Apr 17 23:33:59.277688 containerd[1462]: time="2026-04-17T23:33:59.277577370Z" level=info msg="CreateContainer within sandbox \"bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:33:59.280605 systemd[1]: Created slice kubepods-burstable-pod72df670f_46f7_4a37_8c88_a75339da8060.slice - libcontainer container kubepods-burstable-pod72df670f_46f7_4a37_8c88_a75339da8060.slice. Apr 17 23:33:59.288810 systemd[1]: Created slice kubepods-besteffort-podbd0cce6d_84a0_4bd0_9b88_65c21aa33a1b.slice - libcontainer container kubepods-besteffort-podbd0cce6d_84a0_4bd0_9b88_65c21aa33a1b.slice. Apr 17 23:33:59.298594 systemd[1]: Created slice kubepods-besteffort-podab6d3304_1ee2_4cfe_846f_4d75ab580639.slice - libcontainer container kubepods-besteffort-podab6d3304_1ee2_4cfe_846f_4d75ab580639.slice. Apr 17 23:33:59.301905 systemd[1]: Created slice kubepods-besteffort-pod4dd72ea7_b70c_42f1_9ae1_4082b989f41b.slice - libcontainer container kubepods-besteffort-pod4dd72ea7_b70c_42f1_9ae1_4082b989f41b.slice. 
Apr 17 23:33:59.308200 systemd[1]: Created slice kubepods-besteffort-pod420b2c52_73ae_400b_a63b_96e0bedc89c9.slice - libcontainer container kubepods-besteffort-pod420b2c52_73ae_400b_a63b_96e0bedc89c9.slice. Apr 17 23:33:59.309240 containerd[1462]: time="2026-04-17T23:33:59.305166657Z" level=info msg="CreateContainer within sandbox \"bce8daf084ec6ef1759d2766b9391a1704c8fc30b750c9b24a1c2489b33cd9eb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"47ecc5ac2bcf06a6388ed11c386d7977d7da97f2e81f82d3184a35eb7627b8eb\"" Apr 17 23:33:59.309730 containerd[1462]: time="2026-04-17T23:33:59.309710017Z" level=info msg="StartContainer for \"47ecc5ac2bcf06a6388ed11c386d7977d7da97f2e81f82d3184a35eb7627b8eb\"" Apr 17 23:33:59.315096 systemd[1]: Created slice kubepods-besteffort-podbd369009_82a8_4ff9_89b2_990a5a426bba.slice - libcontainer container kubepods-besteffort-podbd369009_82a8_4ff9_89b2_990a5a426bba.slice. Apr 17 23:33:59.350382 systemd[1]: Started cri-containerd-47ecc5ac2bcf06a6388ed11c386d7977d7da97f2e81f82d3184a35eb7627b8eb.scope - libcontainer container 47ecc5ac2bcf06a6388ed11c386d7977d7da97f2e81f82d3184a35eb7627b8eb. 
Apr 17 23:33:59.362503 kubelet[2504]: I0417 23:33:59.360712 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/bd369009-82a8-4ff9-89b2-990a5a426bba-nginx-config\") pod \"whisker-65dd96956d-nxrcj\" (UID: \"bd369009-82a8-4ff9-89b2-990a5a426bba\") " pod="calico-system/whisker-65dd96956d-nxrcj" Apr 17 23:33:59.362503 kubelet[2504]: I0417 23:33:59.360903 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd369009-82a8-4ff9-89b2-990a5a426bba-whisker-backend-key-pair\") pod \"whisker-65dd96956d-nxrcj\" (UID: \"bd369009-82a8-4ff9-89b2-990a5a426bba\") " pod="calico-system/whisker-65dd96956d-nxrcj" Apr 17 23:33:59.362503 kubelet[2504]: I0417 23:33:59.361436 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd369009-82a8-4ff9-89b2-990a5a426bba-whisker-ca-bundle\") pod \"whisker-65dd96956d-nxrcj\" (UID: \"bd369009-82a8-4ff9-89b2-990a5a426bba\") " pod="calico-system/whisker-65dd96956d-nxrcj" Apr 17 23:33:59.362503 kubelet[2504]: I0417 23:33:59.361453 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzngf\" (UniqueName: \"kubernetes.io/projected/bd369009-82a8-4ff9-89b2-990a5a426bba-kube-api-access-lzngf\") pod \"whisker-65dd96956d-nxrcj\" (UID: \"bd369009-82a8-4ff9-89b2-990a5a426bba\") " pod="calico-system/whisker-65dd96956d-nxrcj" Apr 17 23:33:59.389514 containerd[1462]: time="2026-04-17T23:33:59.389444907Z" level=info msg="StartContainer for \"47ecc5ac2bcf06a6388ed11c386d7977d7da97f2e81f82d3184a35eb7627b8eb\" returns successfully" Apr 17 23:33:59.560692 kubelet[2504]: E0417 23:33:59.560627 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:59.561575 containerd[1462]: time="2026-04-17T23:33:59.561539423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9l5kq,Uid:705e5c5c-1430-4917-b511-364bd6cc7cb4,Namespace:kube-system,Attempt:0,}" Apr 17 23:33:59.585716 kubelet[2504]: E0417 23:33:59.585327 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:33:59.587462 containerd[1462]: time="2026-04-17T23:33:59.587076475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pfnkx,Uid:72df670f-46f7-4a37-8c88-a75339da8060,Namespace:kube-system,Attempt:0,}" Apr 17 23:33:59.593369 containerd[1462]: time="2026-04-17T23:33:59.592830674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fdff4ffb-9pwcd,Uid:bd0cce6d-84a0-4bd0-9b88-65c21aa33a1b,Namespace:calico-system,Attempt:0,}" Apr 17 23:33:59.602088 containerd[1462]: time="2026-04-17T23:33:59.601407003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c59d9f498-7krgp,Uid:ab6d3304-1ee2-4cfe-846f-4d75ab580639,Namespace:calico-system,Attempt:0,}" Apr 17 23:33:59.606243 containerd[1462]: time="2026-04-17T23:33:59.606187682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fdff4ffb-2l7pk,Uid:4dd72ea7-b70c-42f1-9ae1-4082b989f41b,Namespace:calico-system,Attempt:0,}" Apr 17 23:33:59.615073 containerd[1462]: time="2026-04-17T23:33:59.614982837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-wknrq,Uid:420b2c52-73ae-400b-a63b-96e0bedc89c9,Namespace:calico-system,Attempt:0,}" Apr 17 23:33:59.623467 containerd[1462]: time="2026-04-17T23:33:59.623338496Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-65dd96956d-nxrcj,Uid:bd369009-82a8-4ff9-89b2-990a5a426bba,Namespace:calico-system,Attempt:0,}" Apr 17 23:34:00.140488 systemd[1]: Created slice kubepods-besteffort-podca6b2b6e_bb01_4db2_9121_3bab00f81e9d.slice - libcontainer container kubepods-besteffort-podca6b2b6e_bb01_4db2_9121_3bab00f81e9d.slice. Apr 17 23:34:00.143610 containerd[1462]: time="2026-04-17T23:34:00.143568949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-99tqr,Uid:ca6b2b6e-bb01-4db2-9121-3bab00f81e9d,Namespace:calico-system,Attempt:0,}" Apr 17 23:34:00.266072 kubelet[2504]: I0417 23:34:00.265961 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-k8lqj" podStartSLOduration=3.129710008 podStartE2EDuration="13.265941797s" podCreationTimestamp="2026-04-17 23:33:47 +0000 UTC" firstStartedPulling="2026-04-17 23:33:48.305543288 +0000 UTC m=+19.254816889" lastFinishedPulling="2026-04-17 23:33:58.441775076 +0000 UTC m=+29.391048678" observedRunningTime="2026-04-17 23:34:00.264257184 +0000 UTC m=+31.213530799" watchObservedRunningTime="2026-04-17 23:34:00.265941797 +0000 UTC m=+31.215215427" Apr 17 23:34:01.011985 systemd-networkd[1388]: cali237b5786835: Link UP Apr 17 23:34:01.012490 systemd-networkd[1388]: cali237b5786835: Gained carrier Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.780 [ERROR][3534] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.806 [INFO][3534] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--wknrq-eth0 goldmane-5b85766d88- calico-system 420b2c52-73ae-400b-a63b-96e0bedc89c9 842 0 2026-04-17 23:33:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane 
pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-wknrq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali237b5786835 [] [] }} ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Namespace="calico-system" Pod="goldmane-5b85766d88-wknrq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--wknrq-" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.806 [INFO][3534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Namespace="calico-system" Pod="goldmane-5b85766d88-wknrq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--wknrq-eth0" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.863 [INFO][3579] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" HandleID="k8s-pod-network.4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Workload="localhost-k8s-goldmane--5b85766d88--wknrq-eth0" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.870 [INFO][3579] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" HandleID="k8s-pod-network.4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Workload="localhost-k8s-goldmane--5b85766d88--wknrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037c4a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-wknrq", "timestamp":"2026-04-17 23:33:59.863805755 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00017a840)} Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.870 [INFO][3579] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.870 [INFO][3579] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.870 [INFO][3579] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.874 [INFO][3579] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" host="localhost" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.889 [INFO][3579] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:33:59.961 [INFO][3579] ipam/ipam.go 1965: Failed to create global IPAM config; another node got there first. Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.969 [INFO][3579] ipam/ipam.go 558: Ran out of existing affine blocks for host host="localhost" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.974 [INFO][3579] ipam/ipam.go 575: Tried all affine blocks. 
Looking for an affine block with space, or a new unclaimed block host="localhost" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.976 [INFO][3579] ipam/ipam.go 588: Found unclaimed block in 2.74931ms host="localhost" subnet=192.168.88.128/26 Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.976 [INFO][3579] ipam/ipam_block_reader_writer.go 175: Trying to create affinity in pending state host="localhost" subnet=192.168.88.128/26 Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.981 [INFO][3579] ipam/ipam_block_reader_writer.go 186: Block affinity already exists, getting existing affinity host="localhost" subnet=192.168.88.128/26 Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.984 [INFO][3579] ipam/ipam_block_reader_writer.go 194: Got existing affinity host="localhost" subnet=192.168.88.128/26 Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.984 [INFO][3579] ipam/ipam_block_reader_writer.go 202: Existing affinity is already confirmed host="localhost" subnet=192.168.88.128/26 Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.984 [INFO][3579] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.986 [INFO][3579] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.986 [INFO][3579] ipam/ipam.go 623: Block '192.168.88.128/26' has 63 free ips which is more than 1 ips required. 
host="localhost" subnet=192.168.88.128/26 Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.986 [INFO][3579] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" host="localhost" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.988 [INFO][3579] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63 Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.994 [INFO][3579] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" host="localhost" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.998 [INFO][3579] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" host="localhost" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.998 [INFO][3579] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" host="localhost" Apr 17 23:34:01.030037 containerd[1462]: 2026-04-17 23:34:00.998 [INFO][3579] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:34:01.031213 containerd[1462]: 2026-04-17 23:34:00.998 [INFO][3579] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" HandleID="k8s-pod-network.4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Workload="localhost-k8s-goldmane--5b85766d88--wknrq-eth0" Apr 17 23:34:01.031213 containerd[1462]: 2026-04-17 23:34:01.002 [INFO][3534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Namespace="calico-system" Pod="goldmane-5b85766d88-wknrq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--wknrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--wknrq-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"420b2c52-73ae-400b-a63b-96e0bedc89c9", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-wknrq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali237b5786835", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.031213 containerd[1462]: 2026-04-17 23:34:01.002 [INFO][3534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Namespace="calico-system" Pod="goldmane-5b85766d88-wknrq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--wknrq-eth0" Apr 17 23:34:01.031213 containerd[1462]: 2026-04-17 23:34:01.002 [INFO][3534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali237b5786835 ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Namespace="calico-system" Pod="goldmane-5b85766d88-wknrq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--wknrq-eth0" Apr 17 23:34:01.031213 containerd[1462]: 2026-04-17 23:34:01.012 [INFO][3534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Namespace="calico-system" Pod="goldmane-5b85766d88-wknrq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--wknrq-eth0" Apr 17 23:34:01.031213 containerd[1462]: 2026-04-17 23:34:01.013 [INFO][3534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Namespace="calico-system" Pod="goldmane-5b85766d88-wknrq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--wknrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--wknrq-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"420b2c52-73ae-400b-a63b-96e0bedc89c9", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 47, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63", Pod:"goldmane-5b85766d88-wknrq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali237b5786835", MAC:"72:e3:33:7e:6b:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.031213 containerd[1462]: 2026-04-17 23:34:01.027 [INFO][3534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63" Namespace="calico-system" Pod="goldmane-5b85766d88-wknrq" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--wknrq-eth0" Apr 17 23:34:01.053985 systemd-networkd[1388]: cali5b7c7cac752: Link UP Apr 17 23:34:01.055160 systemd-networkd[1388]: cali5b7c7cac752: Gained carrier Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:33:59.770 [ERROR][3475] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:33:59.802 [INFO][3475] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0 calico-apiserver-59fdff4ffb- calico-system bd0cce6d-84a0-4bd0-9b88-65c21aa33a1b 840 0 2026-04-17 23:33:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59fdff4ffb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59fdff4ffb-9pwcd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali5b7c7cac752 [] [] }} ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-9pwcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:33:59.802 [INFO][3475] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-9pwcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:33:59.880 [INFO][3571] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" HandleID="k8s-pod-network.f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Workload="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:33:59.886 [INFO][3571] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" HandleID="k8s-pod-network.f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Workload="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000411b00), 
Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-59fdff4ffb-9pwcd", "timestamp":"2026-04-17 23:33:59.8805647 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000517a20)} Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:33:59.886 [INFO][3571] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:00.998 [INFO][3571] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:00.999 [INFO][3571] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.002 [INFO][3571] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" host="localhost" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.007 [INFO][3571] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.018 [INFO][3571] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.022 [INFO][3571] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.029 [INFO][3571] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.029 [INFO][3571] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 
handle="k8s-pod-network.f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" host="localhost" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.031 [INFO][3571] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.037 [INFO][3571] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" host="localhost" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.048 [INFO][3571] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" host="localhost" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.049 [INFO][3571] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" host="localhost" Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.049 [INFO][3571] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 17 23:34:01.070341 containerd[1462]: 2026-04-17 23:34:01.049 [INFO][3571] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" HandleID="k8s-pod-network.f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Workload="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0" Apr 17 23:34:01.071173 containerd[1462]: 2026-04-17 23:34:01.051 [INFO][3475] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-9pwcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0", GenerateName:"calico-apiserver-59fdff4ffb-", Namespace:"calico-system", SelfLink:"", UID:"bd0cce6d-84a0-4bd0-9b88-65c21aa33a1b", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fdff4ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59fdff4ffb-9pwcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5b7c7cac752", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.071173 containerd[1462]: 2026-04-17 23:34:01.051 [INFO][3475] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-9pwcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0" Apr 17 23:34:01.071173 containerd[1462]: 2026-04-17 23:34:01.051 [INFO][3475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b7c7cac752 ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-9pwcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0" Apr 17 23:34:01.071173 containerd[1462]: 2026-04-17 23:34:01.054 [INFO][3475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-9pwcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0" Apr 17 23:34:01.071173 containerd[1462]: 2026-04-17 23:34:01.054 [INFO][3475] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-9pwcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0", GenerateName:"calico-apiserver-59fdff4ffb-", Namespace:"calico-system", 
SelfLink:"", UID:"bd0cce6d-84a0-4bd0-9b88-65c21aa33a1b", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fdff4ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e", Pod:"calico-apiserver-59fdff4ffb-9pwcd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali5b7c7cac752", MAC:"6a:82:03:4d:9a:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.071173 containerd[1462]: 2026-04-17 23:34:01.068 [INFO][3475] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-9pwcd" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--9pwcd-eth0" Apr 17 23:34:01.092362 containerd[1462]: time="2026-04-17T23:34:01.091888376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:01.092362 containerd[1462]: time="2026-04-17T23:34:01.091932099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:01.092362 containerd[1462]: time="2026-04-17T23:34:01.091941207Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.094862 containerd[1462]: time="2026-04-17T23:34:01.091992295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.115485 containerd[1462]: time="2026-04-17T23:34:01.115385530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:01.115617 containerd[1462]: time="2026-04-17T23:34:01.115482741Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:01.115617 containerd[1462]: time="2026-04-17T23:34:01.115521122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.115961 containerd[1462]: time="2026-04-17T23:34:01.115783131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.156190 systemd[1]: Started cri-containerd-4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63.scope - libcontainer container 4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63. Apr 17 23:34:01.169309 systemd[1]: Started cri-containerd-f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e.scope - libcontainer container f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e. 
Apr 17 23:34:01.173857 systemd-networkd[1388]: calia3c9a76f7c5: Link UP Apr 17 23:34:01.174351 systemd-networkd[1388]: calia3c9a76f7c5: Gained carrier Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:33:59.784 [ERROR][3494] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:33:59.798 [INFO][3494] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0 calico-kube-controllers-7c59d9f498- calico-system ab6d3304-1ee2-4cfe-846f-4d75ab580639 841 0 2026-04-17 23:33:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c59d9f498 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7c59d9f498-7krgp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia3c9a76f7c5 [] [] }} ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Namespace="calico-system" Pod="calico-kube-controllers-7c59d9f498-7krgp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:33:59.799 [INFO][3494] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Namespace="calico-system" Pod="calico-kube-controllers-7c59d9f498-7krgp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:33:59.893 [INFO][3560] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" HandleID="k8s-pod-network.1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Workload="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:33:59.960 [INFO][3560] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" HandleID="k8s-pod-network.1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Workload="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7c59d9f498-7krgp", "timestamp":"2026-04-17 23:33:59.893294278 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00060d4a0)} Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:33:59.960 [INFO][3560] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.049 [INFO][3560] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.049 [INFO][3560] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.106 [INFO][3560] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" host="localhost" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.120 [INFO][3560] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.131 [INFO][3560] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.139 [INFO][3560] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.143 [INFO][3560] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.143 [INFO][3560] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" host="localhost" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.146 [INFO][3560] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.153 [INFO][3560] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" host="localhost" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.164 [INFO][3560] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" host="localhost" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.164 [INFO][3560] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" host="localhost" Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.164 [INFO][3560] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:34:01.195398 containerd[1462]: 2026-04-17 23:34:01.164 [INFO][3560] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" HandleID="k8s-pod-network.1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Workload="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0" Apr 17 23:34:01.195903 containerd[1462]: 2026-04-17 23:34:01.170 [INFO][3494] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Namespace="calico-system" Pod="calico-kube-controllers-7c59d9f498-7krgp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0", GenerateName:"calico-kube-controllers-7c59d9f498-", Namespace:"calico-system", SelfLink:"", UID:"ab6d3304-1ee2-4cfe-846f-4d75ab580639", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c59d9f498", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7c59d9f498-7krgp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3c9a76f7c5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.195903 containerd[1462]: 2026-04-17 23:34:01.171 [INFO][3494] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Namespace="calico-system" Pod="calico-kube-controllers-7c59d9f498-7krgp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0" Apr 17 23:34:01.195903 containerd[1462]: 2026-04-17 23:34:01.171 [INFO][3494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3c9a76f7c5 ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Namespace="calico-system" Pod="calico-kube-controllers-7c59d9f498-7krgp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0" Apr 17 23:34:01.195903 containerd[1462]: 2026-04-17 23:34:01.177 [INFO][3494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Namespace="calico-system" Pod="calico-kube-controllers-7c59d9f498-7krgp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0" Apr 17 23:34:01.195903 containerd[1462]: 
2026-04-17 23:34:01.178 [INFO][3494] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Namespace="calico-system" Pod="calico-kube-controllers-7c59d9f498-7krgp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0", GenerateName:"calico-kube-controllers-7c59d9f498-", Namespace:"calico-system", SelfLink:"", UID:"ab6d3304-1ee2-4cfe-846f-4d75ab580639", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c59d9f498", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d", Pod:"calico-kube-controllers-7c59d9f498-7krgp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3c9a76f7c5", MAC:"e2:ed:ef:de:fc:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.195903 containerd[1462]: 
2026-04-17 23:34:01.192 [INFO][3494] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d" Namespace="calico-system" Pod="calico-kube-controllers-7c59d9f498-7krgp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7c59d9f498--7krgp-eth0" Apr 17 23:34:01.202314 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:34:01.225537 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:34:01.245811 kubelet[2504]: I0417 23:34:01.245743 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:34:01.247489 containerd[1462]: time="2026-04-17T23:34:01.247272041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:01.247489 containerd[1462]: time="2026-04-17T23:34:01.247349132Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:01.247489 containerd[1462]: time="2026-04-17T23:34:01.247358469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.247489 containerd[1462]: time="2026-04-17T23:34:01.247417990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.274181 containerd[1462]: time="2026-04-17T23:34:01.273867616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-wknrq,Uid:420b2c52-73ae-400b-a63b-96e0bedc89c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63\"" Apr 17 23:34:01.278968 containerd[1462]: time="2026-04-17T23:34:01.278121702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:34:01.289353 systemd[1]: Started cri-containerd-1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d.scope - libcontainer container 1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d. Apr 17 23:34:01.292525 systemd-networkd[1388]: calif457a82d6d5: Link UP Apr 17 23:34:01.293496 systemd-networkd[1388]: calif457a82d6d5: Gained carrier Apr 17 23:34:01.322779 containerd[1462]: time="2026-04-17T23:34:01.322522460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fdff4ffb-9pwcd,Uid:bd0cce6d-84a0-4bd0-9b88-65c21aa33a1b,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e\"" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:33:59.764 [ERROR][3450] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:33:59.803 [INFO][3450] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0 coredns-674b8bbfcf- kube-system 705e5c5c-1430-4917-b511-364bd6cc7cb4 835 0 2026-04-17 23:33:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s 
projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-9l5kq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif457a82d6d5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Namespace="kube-system" Pod="coredns-674b8bbfcf-9l5kq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9l5kq-" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:33:59.803 [INFO][3450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Namespace="kube-system" Pod="coredns-674b8bbfcf-9l5kq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:33:59.884 [INFO][3568] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" HandleID="k8s-pod-network.617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Workload="localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:33:59.967 [INFO][3568] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" HandleID="k8s-pod-network.617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Workload="localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001b1ba0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-9l5kq", "timestamp":"2026-04-17 23:33:59.884210031 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00013b080)} 
Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:33:59.967 [INFO][3568] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.164 [INFO][3568] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.164 [INFO][3568] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.204 [INFO][3568] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" host="localhost" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.224 [INFO][3568] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.233 [INFO][3568] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.236 [INFO][3568] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.239 [INFO][3568] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.242 [INFO][3568] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" host="localhost" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.249 [INFO][3568] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.256 [INFO][3568] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" host="localhost" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.270 [INFO][3568] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" host="localhost" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.270 [INFO][3568] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" host="localhost" Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.270 [INFO][3568] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:34:01.341317 containerd[1462]: 2026-04-17 23:34:01.270 [INFO][3568] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" HandleID="k8s-pod-network.617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Workload="localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0" Apr 17 23:34:01.341885 containerd[1462]: 2026-04-17 23:34:01.286 [INFO][3450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Namespace="kube-system" Pod="coredns-674b8bbfcf-9l5kq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"705e5c5c-1430-4917-b511-364bd6cc7cb4", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-9l5kq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif457a82d6d5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.341885 containerd[1462]: 2026-04-17 23:34:01.286 [INFO][3450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Namespace="kube-system" Pod="coredns-674b8bbfcf-9l5kq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0" Apr 17 23:34:01.341885 containerd[1462]: 2026-04-17 23:34:01.287 [INFO][3450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif457a82d6d5 ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Namespace="kube-system" Pod="coredns-674b8bbfcf-9l5kq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0" Apr 17 
23:34:01.341885 containerd[1462]: 2026-04-17 23:34:01.291 [INFO][3450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Namespace="kube-system" Pod="coredns-674b8bbfcf-9l5kq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0" Apr 17 23:34:01.341885 containerd[1462]: 2026-04-17 23:34:01.315 [INFO][3450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Namespace="kube-system" Pod="coredns-674b8bbfcf-9l5kq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"705e5c5c-1430-4917-b511-364bd6cc7cb4", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb", Pod:"coredns-674b8bbfcf-9l5kq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif457a82d6d5", 
MAC:"b6:c6:18:8e:fa:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.341885 containerd[1462]: 2026-04-17 23:34:01.332 [INFO][3450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb" Namespace="kube-system" Pod="coredns-674b8bbfcf-9l5kq" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--9l5kq-eth0" Apr 17 23:34:01.355908 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:34:01.374132 systemd-networkd[1388]: calia1167ea1e24: Link UP Apr 17 23:34:01.374740 systemd-networkd[1388]: calia1167ea1e24: Gained carrier Apr 17 23:34:01.384200 containerd[1462]: time="2026-04-17T23:34:01.382735148Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:01.384200 containerd[1462]: time="2026-04-17T23:34:01.382806339Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:01.384200 containerd[1462]: time="2026-04-17T23:34:01.383459289Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.384200 containerd[1462]: time="2026-04-17T23:34:01.383538937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:33:59.793 [ERROR][3519] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:33:59.810 [INFO][3519] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--65dd96956d--nxrcj-eth0 whisker-65dd96956d- calico-system bd369009-82a8-4ff9-89b2-990a5a426bba 858 0 2026-04-17 23:33:50 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65dd96956d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-65dd96956d-nxrcj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia1167ea1e24 [] [] }} ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Namespace="calico-system" Pod="whisker-65dd96956d-nxrcj" WorkloadEndpoint="localhost-k8s-whisker--65dd96956d--nxrcj-" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:33:59.810 [INFO][3519] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Namespace="calico-system" Pod="whisker-65dd96956d-nxrcj" WorkloadEndpoint="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:33:59.965 [INFO][3578] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:33:59.974 [INFO][3578] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000138780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-65dd96956d-nxrcj", "timestamp":"2026-04-17 23:33:59.965449154 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003dedc0)} Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:33:59.974 [INFO][3578] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.281 [INFO][3578] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.282 [INFO][3578] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.310 [INFO][3578] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" host="localhost" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.327 [INFO][3578] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.340 [INFO][3578] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.344 [INFO][3578] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.346 [INFO][3578] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.346 [INFO][3578] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" host="localhost" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.349 [INFO][3578] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.357 [INFO][3578] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" host="localhost" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.365 [INFO][3578] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" host="localhost" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.366 [INFO][3578] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" host="localhost" Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.367 [INFO][3578] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:34:01.403167 containerd[1462]: 2026-04-17 23:34:01.367 [INFO][3578] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:01.403797 containerd[1462]: 2026-04-17 23:34:01.370 [INFO][3519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Namespace="calico-system" Pod="whisker-65dd96956d-nxrcj" WorkloadEndpoint="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--65dd96956d--nxrcj-eth0", GenerateName:"whisker-65dd96956d-", Namespace:"calico-system", SelfLink:"", UID:"bd369009-82a8-4ff9-89b2-990a5a426bba", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65dd96956d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-65dd96956d-nxrcj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia1167ea1e24", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.403797 containerd[1462]: 2026-04-17 23:34:01.371 [INFO][3519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Namespace="calico-system" Pod="whisker-65dd96956d-nxrcj" WorkloadEndpoint="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:01.403797 containerd[1462]: 2026-04-17 23:34:01.371 [INFO][3519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia1167ea1e24 ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Namespace="calico-system" Pod="whisker-65dd96956d-nxrcj" WorkloadEndpoint="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:01.403797 containerd[1462]: 2026-04-17 23:34:01.375 [INFO][3519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Namespace="calico-system" Pod="whisker-65dd96956d-nxrcj" WorkloadEndpoint="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:01.403797 containerd[1462]: 2026-04-17 23:34:01.377 [INFO][3519] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Namespace="calico-system" Pod="whisker-65dd96956d-nxrcj" 
WorkloadEndpoint="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--65dd96956d--nxrcj-eth0", GenerateName:"whisker-65dd96956d-", Namespace:"calico-system", SelfLink:"", UID:"bd369009-82a8-4ff9-89b2-990a5a426bba", ResourceVersion:"858", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65dd96956d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b", Pod:"whisker-65dd96956d-nxrcj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia1167ea1e24", MAC:"0e:77:cc:b1:23:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.403797 containerd[1462]: 2026-04-17 23:34:01.396 [INFO][3519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Namespace="calico-system" Pod="whisker-65dd96956d-nxrcj" WorkloadEndpoint="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:01.405816 containerd[1462]: time="2026-04-17T23:34:01.405772224Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7c59d9f498-7krgp,Uid:ab6d3304-1ee2-4cfe-846f-4d75ab580639,Namespace:calico-system,Attempt:0,} returns sandbox id \"1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d\"" Apr 17 23:34:01.432137 containerd[1462]: time="2026-04-17T23:34:01.430964170Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:01.432137 containerd[1462]: time="2026-04-17T23:34:01.432018726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:01.432137 containerd[1462]: time="2026-04-17T23:34:01.432048915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.432288 systemd[1]: Started cri-containerd-617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb.scope - libcontainer container 617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb. Apr 17 23:34:01.432731 containerd[1462]: time="2026-04-17T23:34:01.432451283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.445685 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:34:01.462450 systemd[1]: Started cri-containerd-273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b.scope - libcontainer container 273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b. 
Apr 17 23:34:01.474106 systemd-networkd[1388]: cali898204fe1f7: Link UP Apr 17 23:34:01.474588 systemd-networkd[1388]: cali898204fe1f7: Gained carrier Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:33:59.780 [ERROR][3490] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:33:59.811 [INFO][3490] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0 calico-apiserver-59fdff4ffb- calico-system 4dd72ea7-b70c-42f1-9ae1-4082b989f41b 846 0 2026-04-17 23:33:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:59fdff4ffb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-59fdff4ffb-2l7pk eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali898204fe1f7 [] [] }} ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-2l7pk" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:33:59.811 [INFO][3490] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-2l7pk" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:33:59.962 [INFO][3580] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" 
HandleID="k8s-pod-network.d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Workload="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:33:59.975 [INFO][3580] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" HandleID="k8s-pod-network.d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Workload="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139bb0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-59fdff4ffb-2l7pk", "timestamp":"2026-04-17 23:33:59.962331834 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000326580)} Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:33:59.976 [INFO][3580] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.366 [INFO][3580] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.366 [INFO][3580] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.406 [INFO][3580] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" host="localhost" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.423 [INFO][3580] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.437 [INFO][3580] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.439 [INFO][3580] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.442 [INFO][3580] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.442 [INFO][3580] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" host="localhost" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.444 [INFO][3580] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.450 [INFO][3580] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" host="localhost" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.462 [INFO][3580] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" host="localhost" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.462 [INFO][3580] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" host="localhost" Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.463 [INFO][3580] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:34:01.491020 containerd[1462]: 2026-04-17 23:34:01.463 [INFO][3580] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" HandleID="k8s-pod-network.d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Workload="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0" Apr 17 23:34:01.491475 containerd[1462]: 2026-04-17 23:34:01.468 [INFO][3490] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-2l7pk" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0", GenerateName:"calico-apiserver-59fdff4ffb-", Namespace:"calico-system", SelfLink:"", UID:"4dd72ea7-b70c-42f1-9ae1-4082b989f41b", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fdff4ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-59fdff4ffb-2l7pk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali898204fe1f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.491475 containerd[1462]: 2026-04-17 23:34:01.470 [INFO][3490] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-2l7pk" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0" Apr 17 23:34:01.491475 containerd[1462]: 2026-04-17 23:34:01.470 [INFO][3490] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali898204fe1f7 ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-2l7pk" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0" Apr 17 23:34:01.491475 containerd[1462]: 2026-04-17 23:34:01.475 [INFO][3490] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-2l7pk" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0" Apr 17 23:34:01.491475 containerd[1462]: 2026-04-17 23:34:01.475 [INFO][3490] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-2l7pk" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0", GenerateName:"calico-apiserver-59fdff4ffb-", Namespace:"calico-system", SelfLink:"", UID:"4dd72ea7-b70c-42f1-9ae1-4082b989f41b", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"59fdff4ffb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d", Pod:"calico-apiserver-59fdff4ffb-2l7pk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali898204fe1f7", MAC:"5a:79:74:05:87:e9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.491475 containerd[1462]: 2026-04-17 23:34:01.488 [INFO][3490] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d" Namespace="calico-system" Pod="calico-apiserver-59fdff4ffb-2l7pk" WorkloadEndpoint="localhost-k8s-calico--apiserver--59fdff4ffb--2l7pk-eth0" Apr 17 23:34:01.499960 containerd[1462]: time="2026-04-17T23:34:01.499069544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-9l5kq,Uid:705e5c5c-1430-4917-b511-364bd6cc7cb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb\"" Apr 17 23:34:01.502055 kubelet[2504]: E0417 23:34:01.501563 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:34:01.508968 containerd[1462]: time="2026-04-17T23:34:01.508722889Z" level=info msg="CreateContainer within sandbox \"617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:34:01.535408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830160636.mount: Deactivated successfully. Apr 17 23:34:01.545878 containerd[1462]: time="2026-04-17T23:34:01.545249410Z" level=info msg="CreateContainer within sandbox \"617e43d67ff147ca819573b0a020054a6c8464826f294dbbb30d686c077cb0cb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea156d9b5e9073f2f61b052a6857e26ff608e19c0414d6920b5c4e30e1cb6f9d\"" Apr 17 23:34:01.548904 containerd[1462]: time="2026-04-17T23:34:01.548624388Z" level=info msg="StartContainer for \"ea156d9b5e9073f2f61b052a6857e26ff608e19c0414d6920b5c4e30e1cb6f9d\"" Apr 17 23:34:01.549822 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:34:01.550537 containerd[1462]: time="2026-04-17T23:34:01.549718639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:01.568804 containerd[1462]: time="2026-04-17T23:34:01.549774193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:01.568804 containerd[1462]: time="2026-04-17T23:34:01.568754088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.568971 containerd[1462]: time="2026-04-17T23:34:01.568909649Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.588575 systemd-networkd[1388]: cali58a13caf563: Link UP Apr 17 23:34:01.588847 systemd-networkd[1388]: cali58a13caf563: Gained carrier Apr 17 23:34:01.612215 systemd[1]: Started cri-containerd-d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d.scope - libcontainer container d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d. Apr 17 23:34:01.629341 systemd[1]: Started cri-containerd-ea156d9b5e9073f2f61b052a6857e26ff608e19c0414d6920b5c4e30e1cb6f9d.scope - libcontainer container ea156d9b5e9073f2f61b052a6857e26ff608e19c0414d6920b5c4e30e1cb6f9d. 
Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:33:59.768 [ERROR][3457] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:33:59.802 [INFO][3457] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0 coredns-674b8bbfcf- kube-system 72df670f-46f7-4a37-8c88-a75339da8060 845 0 2026-04-17 23:33:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-pfnkx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali58a13caf563 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Namespace="kube-system" Pod="coredns-674b8bbfcf-pfnkx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pfnkx-" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:33:59.802 [INFO][3457] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Namespace="kube-system" Pod="coredns-674b8bbfcf-pfnkx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:33:59.970 [INFO][3569] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" HandleID="k8s-pod-network.eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Workload="localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:33:59.977 [INFO][3569] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" HandleID="k8s-pod-network.eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Workload="localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003fdb50), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-pfnkx", "timestamp":"2026-04-17 23:33:59.970948869 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00025c420)} Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:33:59.978 [INFO][3569] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.463 [INFO][3569] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.468 [INFO][3569] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.508 [INFO][3569] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" host="localhost" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.522 [INFO][3569] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.540 [INFO][3569] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.546 [INFO][3569] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.553 [INFO][3569] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.553 [INFO][3569] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" host="localhost" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.558 [INFO][3569] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16 Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.569 [INFO][3569] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" host="localhost" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.580 [INFO][3569] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" host="localhost" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.580 [INFO][3569] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" host="localhost" Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.580 [INFO][3569] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:34:01.636409 containerd[1462]: 2026-04-17 23:34:01.580 [INFO][3569] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" HandleID="k8s-pod-network.eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Workload="localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0" Apr 17 23:34:01.636945 containerd[1462]: 2026-04-17 23:34:01.585 [INFO][3457] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Namespace="kube-system" Pod="coredns-674b8bbfcf-pfnkx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"72df670f-46f7-4a37-8c88-a75339da8060", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-pfnkx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58a13caf563", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.636945 containerd[1462]: 2026-04-17 23:34:01.586 [INFO][3457] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Namespace="kube-system" Pod="coredns-674b8bbfcf-pfnkx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0" Apr 17 23:34:01.636945 containerd[1462]: 2026-04-17 23:34:01.586 [INFO][3457] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58a13caf563 ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Namespace="kube-system" Pod="coredns-674b8bbfcf-pfnkx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0" Apr 17 23:34:01.636945 containerd[1462]: 2026-04-17 23:34:01.589 [INFO][3457] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Namespace="kube-system" Pod="coredns-674b8bbfcf-pfnkx" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0" Apr 17 23:34:01.636945 containerd[1462]: 2026-04-17 23:34:01.603 [INFO][3457] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Namespace="kube-system" Pod="coredns-674b8bbfcf-pfnkx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"72df670f-46f7-4a37-8c88-a75339da8060", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16", Pod:"coredns-674b8bbfcf-pfnkx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali58a13caf563", MAC:"da:80:76:61:48:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.636945 containerd[1462]: 2026-04-17 23:34:01.628 [INFO][3457] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16" Namespace="kube-system" Pod="coredns-674b8bbfcf-pfnkx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pfnkx-eth0" Apr 17 23:34:01.647451 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:34:01.649343 containerd[1462]: time="2026-04-17T23:34:01.649125191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65dd96956d-nxrcj,Uid:bd369009-82a8-4ff9-89b2-990a5a426bba,Namespace:calico-system,Attempt:0,} returns sandbox id \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\"" Apr 17 23:34:01.661131 kernel: calico-node[3883]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:34:01.667555 containerd[1462]: time="2026-04-17T23:34:01.667511385Z" level=info msg="StartContainer for \"ea156d9b5e9073f2f61b052a6857e26ff608e19c0414d6920b5c4e30e1cb6f9d\" returns successfully" Apr 17 23:34:01.685916 systemd-networkd[1388]: cali18eaeeda3b9: Link UP Apr 17 23:34:01.693200 systemd-networkd[1388]: cali18eaeeda3b9: Gained carrier Apr 17 23:34:01.696035 containerd[1462]: time="2026-04-17T23:34:01.695850385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:01.696035 containerd[1462]: time="2026-04-17T23:34:01.695974008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:01.696203 containerd[1462]: time="2026-04-17T23:34:01.695986093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.697235 containerd[1462]: time="2026-04-17T23:34:01.696371472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:00.184 [ERROR][3634] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:00.203 [INFO][3634] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--99tqr-eth0 csi-node-driver- calico-system ca6b2b6e-bb01-4db2-9121-3bab00f81e9d 714 0 2026-04-17 23:33:47 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-99tqr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali18eaeeda3b9 [] [] }} ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" Namespace="calico-system" Pod="csi-node-driver-99tqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--99tqr-" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:00.204 [INFO][3634] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" Namespace="calico-system" Pod="csi-node-driver-99tqr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--99tqr-eth0" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:00.243 [INFO][3647] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" HandleID="k8s-pod-network.1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" Workload="localhost-k8s-csi--node--driver--99tqr-eth0" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:00.252 [INFO][3647] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" HandleID="k8s-pod-network.1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" Workload="localhost-k8s-csi--node--driver--99tqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000c1980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-99tqr", "timestamp":"2026-04-17 23:34:00.243317288 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001c4f20)} Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:00.252 [INFO][3647] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.582 [INFO][3647] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.582 [INFO][3647] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.622 [INFO][3647] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" host="localhost" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.632 [INFO][3647] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.641 [INFO][3647] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.645 [INFO][3647] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.649 [INFO][3647] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.650 [INFO][3647] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" host="localhost" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.654 [INFO][3647] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98 Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.663 [INFO][3647] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" host="localhost" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.677 [INFO][3647] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" host="localhost" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.678 [INFO][3647] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" host="localhost" Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.678 [INFO][3647] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:34:01.722330 containerd[1462]: 2026-04-17 23:34:01.678 [INFO][3647] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" HandleID="k8s-pod-network.1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" Workload="localhost-k8s-csi--node--driver--99tqr-eth0" Apr 17 23:34:01.723875 containerd[1462]: 2026-04-17 23:34:01.682 [INFO][3634] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" Namespace="calico-system" Pod="csi-node-driver-99tqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--99tqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--99tqr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca6b2b6e-bb01-4db2-9121-3bab00f81e9d", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-99tqr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18eaeeda3b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.723875 containerd[1462]: 2026-04-17 23:34:01.682 [INFO][3634] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" Namespace="calico-system" Pod="csi-node-driver-99tqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--99tqr-eth0" Apr 17 23:34:01.723875 containerd[1462]: 2026-04-17 23:34:01.682 [INFO][3634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18eaeeda3b9 ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" Namespace="calico-system" Pod="csi-node-driver-99tqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--99tqr-eth0" Apr 17 23:34:01.723875 containerd[1462]: 2026-04-17 23:34:01.693 [INFO][3634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" Namespace="calico-system" Pod="csi-node-driver-99tqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--99tqr-eth0" Apr 17 23:34:01.723875 containerd[1462]: 2026-04-17 23:34:01.694 [INFO][3634] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" 
Namespace="calico-system" Pod="csi-node-driver-99tqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--99tqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--99tqr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ca6b2b6e-bb01-4db2-9121-3bab00f81e9d", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98", Pod:"csi-node-driver-99tqr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18eaeeda3b9", MAC:"2e:dd:da:35:f2:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:01.723875 containerd[1462]: 2026-04-17 23:34:01.716 [INFO][3634] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98" Namespace="calico-system" Pod="csi-node-driver-99tqr" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--99tqr-eth0" Apr 17 23:34:01.730393 containerd[1462]: time="2026-04-17T23:34:01.729990122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-59fdff4ffb-2l7pk,Uid:4dd72ea7-b70c-42f1-9ae1-4082b989f41b,Namespace:calico-system,Attempt:0,} returns sandbox id \"d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d\"" Apr 17 23:34:01.763275 systemd[1]: Started cri-containerd-eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16.scope - libcontainer container eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16. Apr 17 23:34:01.784041 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:34:01.785306 containerd[1462]: time="2026-04-17T23:34:01.784831602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:01.785306 containerd[1462]: time="2026-04-17T23:34:01.784947021Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:01.785306 containerd[1462]: time="2026-04-17T23:34:01.784958569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.785306 containerd[1462]: time="2026-04-17T23:34:01.785122021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:01.828234 systemd[1]: Started cri-containerd-1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98.scope - libcontainer container 1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98. 
Apr 17 23:34:01.874061 containerd[1462]: time="2026-04-17T23:34:01.873958977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pfnkx,Uid:72df670f-46f7-4a37-8c88-a75339da8060,Namespace:kube-system,Attempt:0,} returns sandbox id \"eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16\"" Apr 17 23:34:01.878626 kubelet[2504]: E0417 23:34:01.878481 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:34:01.883879 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:34:01.894051 containerd[1462]: time="2026-04-17T23:34:01.893975413Z" level=info msg="CreateContainer within sandbox \"eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:34:01.913262 containerd[1462]: time="2026-04-17T23:34:01.912663889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-99tqr,Uid:ca6b2b6e-bb01-4db2-9121-3bab00f81e9d,Namespace:calico-system,Attempt:0,} returns sandbox id \"1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98\"" Apr 17 23:34:01.927109 containerd[1462]: time="2026-04-17T23:34:01.926897303Z" level=info msg="CreateContainer within sandbox \"eaa053fe94f2009fb57ba3a65e9e8524e62ca5ec5b6512213e64045b174f8e16\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"93e1c7e176cb0732e89add42c7df65a26489f38f70bbffb912ebe20936a0113d\"" Apr 17 23:34:01.927856 containerd[1462]: time="2026-04-17T23:34:01.927834491Z" level=info msg="StartContainer for \"93e1c7e176cb0732e89add42c7df65a26489f38f70bbffb912ebe20936a0113d\"" Apr 17 23:34:01.955195 systemd[1]: Started cri-containerd-93e1c7e176cb0732e89add42c7df65a26489f38f70bbffb912ebe20936a0113d.scope - libcontainer container 
93e1c7e176cb0732e89add42c7df65a26489f38f70bbffb912ebe20936a0113d. Apr 17 23:34:01.992041 containerd[1462]: time="2026-04-17T23:34:01.991970033Z" level=info msg="StartContainer for \"93e1c7e176cb0732e89add42c7df65a26489f38f70bbffb912ebe20936a0113d\" returns successfully" Apr 17 23:34:02.180592 systemd-networkd[1388]: vxlan.calico: Link UP Apr 17 23:34:02.180608 systemd-networkd[1388]: vxlan.calico: Gained carrier Apr 17 23:34:02.249540 kubelet[2504]: E0417 23:34:02.249464 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:34:02.259231 kubelet[2504]: E0417 23:34:02.259174 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:34:02.284485 kubelet[2504]: I0417 23:34:02.284393 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-9l5kq" podStartSLOduration=26.284374433 podStartE2EDuration="26.284374433s" podCreationTimestamp="2026-04-17 23:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:34:02.267582122 +0000 UTC m=+33.216855726" watchObservedRunningTime="2026-04-17 23:34:02.284374433 +0000 UTC m=+33.233648043" Apr 17 23:34:02.310971 kubelet[2504]: I0417 23:34:02.310916 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pfnkx" podStartSLOduration=26.310895477 podStartE2EDuration="26.310895477s" podCreationTimestamp="2026-04-17 23:33:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:34:02.308541191 +0000 UTC m=+33.257814792" watchObservedRunningTime="2026-04-17 23:34:02.310895477 +0000 UTC 
m=+33.260169161" Apr 17 23:34:02.328329 systemd-networkd[1388]: calia3c9a76f7c5: Gained IPv6LL Apr 17 23:34:02.329258 systemd-networkd[1388]: cali237b5786835: Gained IPv6LL Apr 17 23:34:02.713147 systemd-networkd[1388]: cali898204fe1f7: Gained IPv6LL Apr 17 23:34:02.776217 systemd-networkd[1388]: cali5b7c7cac752: Gained IPv6LL Apr 17 23:34:02.906493 systemd-networkd[1388]: cali58a13caf563: Gained IPv6LL Apr 17 23:34:02.970051 systemd-networkd[1388]: calif457a82d6d5: Gained IPv6LL Apr 17 23:34:02.970579 systemd-networkd[1388]: calia1167ea1e24: Gained IPv6LL Apr 17 23:34:03.169283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1975947259.mount: Deactivated successfully. Apr 17 23:34:03.260479 kubelet[2504]: E0417 23:34:03.260454 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:34:03.260764 kubelet[2504]: E0417 23:34:03.260537 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:34:03.471176 containerd[1462]: time="2026-04-17T23:34:03.471097867Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:03.471976 containerd[1462]: time="2026-04-17T23:34:03.471908184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:34:03.473099 containerd[1462]: time="2026-04-17T23:34:03.473051253Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:03.475325 containerd[1462]: time="2026-04-17T23:34:03.475269149Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:03.475821 containerd[1462]: time="2026-04-17T23:34:03.475788770Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.19763217s" Apr 17 23:34:03.475878 containerd[1462]: time="2026-04-17T23:34:03.475822549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:34:03.476931 containerd[1462]: time="2026-04-17T23:34:03.476908858Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:34:03.480854 containerd[1462]: time="2026-04-17T23:34:03.480818823Z" level=info msg="CreateContainer within sandbox \"4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:34:03.493910 containerd[1462]: time="2026-04-17T23:34:03.493863841Z" level=info msg="CreateContainer within sandbox \"4a747cf44b35a2c95941ef2a0ada85c33b252bee0d5e2c866bc3e1e01258ac63\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f93be16ec749b5b54a7e03384b764b82f67fad41cbb73ef9489d942bae450bce\"" Apr 17 23:34:03.494482 containerd[1462]: time="2026-04-17T23:34:03.494463784Z" level=info msg="StartContainer for \"f93be16ec749b5b54a7e03384b764b82f67fad41cbb73ef9489d942bae450bce\"" Apr 17 23:34:03.526329 systemd[1]: Started cri-containerd-f93be16ec749b5b54a7e03384b764b82f67fad41cbb73ef9489d942bae450bce.scope - libcontainer container 
f93be16ec749b5b54a7e03384b764b82f67fad41cbb73ef9489d942bae450bce. Apr 17 23:34:03.562907 containerd[1462]: time="2026-04-17T23:34:03.562823448Z" level=info msg="StartContainer for \"f93be16ec749b5b54a7e03384b764b82f67fad41cbb73ef9489d942bae450bce\" returns successfully" Apr 17 23:34:03.736277 systemd-networkd[1388]: cali18eaeeda3b9: Gained IPv6LL Apr 17 23:34:03.928298 systemd-networkd[1388]: vxlan.calico: Gained IPv6LL Apr 17 23:34:04.263981 kubelet[2504]: E0417 23:34:04.263948 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:34:04.264392 kubelet[2504]: E0417 23:34:04.264054 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 17 23:34:04.276116 kubelet[2504]: I0417 23:34:04.275824 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-wknrq" podStartSLOduration=15.075466923 podStartE2EDuration="17.275811594s" podCreationTimestamp="2026-04-17 23:33:47 +0000 UTC" firstStartedPulling="2026-04-17 23:34:01.276402922 +0000 UTC m=+32.225676523" lastFinishedPulling="2026-04-17 23:34:03.476747593 +0000 UTC m=+34.426021194" observedRunningTime="2026-04-17 23:34:04.275541279 +0000 UTC m=+35.224814887" watchObservedRunningTime="2026-04-17 23:34:04.275811594 +0000 UTC m=+35.225085203" Apr 17 23:34:05.316730 systemd[1]: run-containerd-runc-k8s.io-f93be16ec749b5b54a7e03384b764b82f67fad41cbb73ef9489d942bae450bce-runc.xypUFX.mount: Deactivated successfully. 
Apr 17 23:34:05.427977 containerd[1462]: time="2026-04-17T23:34:05.427873609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:05.428498 containerd[1462]: time="2026-04-17T23:34:05.428454947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:34:05.429719 containerd[1462]: time="2026-04-17T23:34:05.429678263Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:05.432393 containerd[1462]: time="2026-04-17T23:34:05.432345492Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:05.433068 containerd[1462]: time="2026-04-17T23:34:05.433040214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 1.956106446s" Apr 17 23:34:05.433124 containerd[1462]: time="2026-04-17T23:34:05.433070825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:34:05.434307 containerd[1462]: time="2026-04-17T23:34:05.434236416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 23:34:05.436865 containerd[1462]: time="2026-04-17T23:34:05.436824955Z" level=info msg="CreateContainer within sandbox 
\"f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:34:05.454849 containerd[1462]: time="2026-04-17T23:34:05.454799786Z" level=info msg="CreateContainer within sandbox \"f5b3279947543ae4247ce03d8857453e978cf7940281044555188b329f8f3a5e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f1cba3592222e965edb8a415b782a607cc938d2b9d02897442f2e589f89bb38\"" Apr 17 23:34:05.456232 containerd[1462]: time="2026-04-17T23:34:05.455361266Z" level=info msg="StartContainer for \"9f1cba3592222e965edb8a415b782a607cc938d2b9d02897442f2e589f89bb38\"" Apr 17 23:34:05.501207 systemd[1]: Started cri-containerd-9f1cba3592222e965edb8a415b782a607cc938d2b9d02897442f2e589f89bb38.scope - libcontainer container 9f1cba3592222e965edb8a415b782a607cc938d2b9d02897442f2e589f89bb38. Apr 17 23:34:05.538382 containerd[1462]: time="2026-04-17T23:34:05.538331867Z" level=info msg="StartContainer for \"9f1cba3592222e965edb8a415b782a607cc938d2b9d02897442f2e589f89bb38\" returns successfully" Apr 17 23:34:06.700854 kubelet[2504]: I0417 23:34:06.700789 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-59fdff4ffb-9pwcd" podStartSLOduration=15.594482755 podStartE2EDuration="19.700771494s" podCreationTimestamp="2026-04-17 23:33:47 +0000 UTC" firstStartedPulling="2026-04-17 23:34:01.327526235 +0000 UTC m=+32.276799835" lastFinishedPulling="2026-04-17 23:34:05.433814968 +0000 UTC m=+36.383088574" observedRunningTime="2026-04-17 23:34:06.304198773 +0000 UTC m=+37.253472385" watchObservedRunningTime="2026-04-17 23:34:06.700771494 +0000 UTC m=+37.650045106" Apr 17 23:34:08.408419 containerd[1462]: time="2026-04-17T23:34:08.408354972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:08.409266 containerd[1462]: 
time="2026-04-17T23:34:08.409218537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:34:08.418079 containerd[1462]: time="2026-04-17T23:34:08.418032292Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:08.421502 containerd[1462]: time="2026-04-17T23:34:08.421447129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:08.422122 containerd[1462]: time="2026-04-17T23:34:08.422073333Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 2.987812127s" Apr 17 23:34:08.422122 containerd[1462]: time="2026-04-17T23:34:08.422110094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:34:08.423054 containerd[1462]: time="2026-04-17T23:34:08.423029461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:34:08.432928 containerd[1462]: time="2026-04-17T23:34:08.432873339Z" level=info msg="CreateContainer within sandbox \"1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:34:08.446868 containerd[1462]: time="2026-04-17T23:34:08.446817535Z" level=info msg="CreateContainer within sandbox 
\"1589454416a3d0500392c5271a035d1e83c6f4ceeb1c9251bf9124f4f3a7fc2d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"de1d02fe126799e4d3557b3143aef450bfd70be3180f3a3707caffecbe528aa0\"" Apr 17 23:34:08.447564 containerd[1462]: time="2026-04-17T23:34:08.447532231Z" level=info msg="StartContainer for \"de1d02fe126799e4d3557b3143aef450bfd70be3180f3a3707caffecbe528aa0\"" Apr 17 23:34:08.532270 systemd[1]: Started cri-containerd-de1d02fe126799e4d3557b3143aef450bfd70be3180f3a3707caffecbe528aa0.scope - libcontainer container de1d02fe126799e4d3557b3143aef450bfd70be3180f3a3707caffecbe528aa0. Apr 17 23:34:08.571224 containerd[1462]: time="2026-04-17T23:34:08.571169780Z" level=info msg="StartContainer for \"de1d02fe126799e4d3557b3143aef450bfd70be3180f3a3707caffecbe528aa0\" returns successfully" Apr 17 23:34:09.305422 kubelet[2504]: I0417 23:34:09.304894 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c59d9f498-7krgp" podStartSLOduration=14.2916656 podStartE2EDuration="21.30488072s" podCreationTimestamp="2026-04-17 23:33:48 +0000 UTC" firstStartedPulling="2026-04-17 23:34:01.409661692 +0000 UTC m=+32.358935298" lastFinishedPulling="2026-04-17 23:34:08.422876809 +0000 UTC m=+39.372150418" observedRunningTime="2026-04-17 23:34:09.304582066 +0000 UTC m=+40.253855692" watchObservedRunningTime="2026-04-17 23:34:09.30488072 +0000 UTC m=+40.254154332" Apr 17 23:34:09.903115 containerd[1462]: time="2026-04-17T23:34:09.903039248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:09.903863 containerd[1462]: time="2026-04-17T23:34:09.903815645Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:34:09.905183 containerd[1462]: time="2026-04-17T23:34:09.905132930Z" level=info msg="ImageCreate event 
name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:09.907545 containerd[1462]: time="2026-04-17T23:34:09.907490944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:09.908384 containerd[1462]: time="2026-04-17T23:34:09.908344844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.48528616s" Apr 17 23:34:09.908415 containerd[1462]: time="2026-04-17T23:34:09.908383018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:34:09.909498 containerd[1462]: time="2026-04-17T23:34:09.909335654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:34:09.913899 containerd[1462]: time="2026-04-17T23:34:09.913713635Z" level=info msg="CreateContainer within sandbox \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:34:09.930813 containerd[1462]: time="2026-04-17T23:34:09.930730407Z" level=info msg="CreateContainer within sandbox \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\"" Apr 17 23:34:09.931484 containerd[1462]: time="2026-04-17T23:34:09.931457750Z" level=info msg="StartContainer for 
\"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\"" Apr 17 23:34:09.982370 systemd[1]: run-containerd-runc-k8s.io-d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3-runc.sphkct.mount: Deactivated successfully. Apr 17 23:34:09.991209 systemd[1]: Started cri-containerd-d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3.scope - libcontainer container d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3. Apr 17 23:34:10.029425 containerd[1462]: time="2026-04-17T23:34:10.029377173Z" level=info msg="StartContainer for \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\" returns successfully" Apr 17 23:34:10.307029 containerd[1462]: time="2026-04-17T23:34:10.306922858Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:10.307872 containerd[1462]: time="2026-04-17T23:34:10.307819735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:34:10.310243 containerd[1462]: time="2026-04-17T23:34:10.310201988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 400.841196ms" Apr 17 23:34:10.310243 containerd[1462]: time="2026-04-17T23:34:10.310234808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:34:10.311435 containerd[1462]: time="2026-04-17T23:34:10.311391716Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:34:10.314877 containerd[1462]: 
time="2026-04-17T23:34:10.314839835Z" level=info msg="CreateContainer within sandbox \"d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:34:10.372956 containerd[1462]: time="2026-04-17T23:34:10.372686620Z" level=info msg="CreateContainer within sandbox \"d804b0ccf0f192d2f3f3b0c632aea75ad1780932f9ebbf5b5b94622f72b4c46d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e0da5d8877c9f9501e613f33221dbabb31c8510375e4047130759bc7a2fad08c\"" Apr 17 23:34:10.373873 containerd[1462]: time="2026-04-17T23:34:10.373839295Z" level=info msg="StartContainer for \"e0da5d8877c9f9501e613f33221dbabb31c8510375e4047130759bc7a2fad08c\"" Apr 17 23:34:10.407166 systemd[1]: Started cri-containerd-e0da5d8877c9f9501e613f33221dbabb31c8510375e4047130759bc7a2fad08c.scope - libcontainer container e0da5d8877c9f9501e613f33221dbabb31c8510375e4047130759bc7a2fad08c. Apr 17 23:34:10.453397 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:58764.service - OpenSSH per-connection server daemon (10.0.0.1:58764). Apr 17 23:34:10.455437 containerd[1462]: time="2026-04-17T23:34:10.455264837Z" level=info msg="StartContainer for \"e0da5d8877c9f9501e613f33221dbabb31c8510375e4047130759bc7a2fad08c\" returns successfully" Apr 17 23:34:10.500112 sshd[4713]: Accepted publickey for core from 10.0.0.1 port 58764 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:10.501797 sshd[4713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:10.506212 systemd-logind[1440]: New session 8 of user core. Apr 17 23:34:10.514177 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:34:10.796745 sshd[4713]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:10.799651 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:58764.service: Deactivated successfully. 
Apr 17 23:34:10.801260 systemd[1]: session-8.scope: Deactivated successfully. Apr 17 23:34:10.801841 systemd-logind[1440]: Session 8 logged out. Waiting for processes to exit. Apr 17 23:34:10.802708 systemd-logind[1440]: Removed session 8. Apr 17 23:34:11.297829 kubelet[2504]: I0417 23:34:11.297756 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-59fdff4ffb-2l7pk" podStartSLOduration=15.717805326 podStartE2EDuration="24.297741591s" podCreationTimestamp="2026-04-17 23:33:47 +0000 UTC" firstStartedPulling="2026-04-17 23:34:01.731276994 +0000 UTC m=+32.680550596" lastFinishedPulling="2026-04-17 23:34:10.311213255 +0000 UTC m=+41.260486861" observedRunningTime="2026-04-17 23:34:11.297539332 +0000 UTC m=+42.246812946" watchObservedRunningTime="2026-04-17 23:34:11.297741591 +0000 UTC m=+42.247015200" Apr 17 23:34:11.698048 containerd[1462]: time="2026-04-17T23:34:11.697853117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:11.698903 containerd[1462]: time="2026-04-17T23:34:11.698850596Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:34:11.704575 containerd[1462]: time="2026-04-17T23:34:11.704531949Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:11.707308 containerd[1462]: time="2026-04-17T23:34:11.707240138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:11.707723 containerd[1462]: time="2026-04-17T23:34:11.707653889Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.396224139s" Apr 17 23:34:11.707723 containerd[1462]: time="2026-04-17T23:34:11.707698028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:34:11.708599 containerd[1462]: time="2026-04-17T23:34:11.708573193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:34:11.715381 containerd[1462]: time="2026-04-17T23:34:11.715328020Z" level=info msg="CreateContainer within sandbox \"1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:34:11.737816 containerd[1462]: time="2026-04-17T23:34:11.737774804Z" level=info msg="CreateContainer within sandbox \"1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e973c12fe2bae33a0c30775d776755ec0b5766b0ad69207f0af89791a70314a3\"" Apr 17 23:34:11.738578 containerd[1462]: time="2026-04-17T23:34:11.738548048Z" level=info msg="StartContainer for \"e973c12fe2bae33a0c30775d776755ec0b5766b0ad69207f0af89791a70314a3\"" Apr 17 23:34:11.769082 systemd[1]: run-containerd-runc-k8s.io-e973c12fe2bae33a0c30775d776755ec0b5766b0ad69207f0af89791a70314a3-runc.ixVNfa.mount: Deactivated successfully. Apr 17 23:34:11.779201 systemd[1]: Started cri-containerd-e973c12fe2bae33a0c30775d776755ec0b5766b0ad69207f0af89791a70314a3.scope - libcontainer container e973c12fe2bae33a0c30775d776755ec0b5766b0ad69207f0af89791a70314a3. 
Apr 17 23:34:11.803068 containerd[1462]: time="2026-04-17T23:34:11.803034612Z" level=info msg="StartContainer for \"e973c12fe2bae33a0c30775d776755ec0b5766b0ad69207f0af89791a70314a3\" returns successfully" Apr 17 23:34:12.290092 kubelet[2504]: I0417 23:34:12.290055 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:34:13.208793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2062379407.mount: Deactivated successfully. Apr 17 23:34:13.230520 containerd[1462]: time="2026-04-17T23:34:13.230460251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:13.231711 containerd[1462]: time="2026-04-17T23:34:13.231635399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:34:13.233288 containerd[1462]: time="2026-04-17T23:34:13.233237039Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:13.236084 containerd[1462]: time="2026-04-17T23:34:13.236035166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:13.237367 containerd[1462]: time="2026-04-17T23:34:13.237301700Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.528689555s" Apr 17 23:34:13.237367 containerd[1462]: 
time="2026-04-17T23:34:13.237358187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:34:13.238901 containerd[1462]: time="2026-04-17T23:34:13.238613002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:34:13.241647 containerd[1462]: time="2026-04-17T23:34:13.241583248Z" level=info msg="CreateContainer within sandbox \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:34:13.257810 containerd[1462]: time="2026-04-17T23:34:13.257634287Z" level=info msg="CreateContainer within sandbox \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\"" Apr 17 23:34:13.259793 containerd[1462]: time="2026-04-17T23:34:13.258790368Z" level=info msg="StartContainer for \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\"" Apr 17 23:34:13.294216 systemd[1]: Started cri-containerd-be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f.scope - libcontainer container be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f. 
Apr 17 23:34:13.338355 containerd[1462]: time="2026-04-17T23:34:13.338284243Z" level=info msg="StartContainer for \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\" returns successfully" Apr 17 23:34:13.922291 kubelet[2504]: I0417 23:34:13.922222 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:34:14.312640 kubelet[2504]: I0417 23:34:14.311709 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-65dd96956d-nxrcj" podStartSLOduration=12.724270204 podStartE2EDuration="24.311664918s" podCreationTimestamp="2026-04-17 23:33:50 +0000 UTC" firstStartedPulling="2026-04-17 23:34:01.651091577 +0000 UTC m=+32.600365187" lastFinishedPulling="2026-04-17 23:34:13.238486292 +0000 UTC m=+44.187759901" observedRunningTime="2026-04-17 23:34:14.311294211 +0000 UTC m=+45.260567827" watchObservedRunningTime="2026-04-17 23:34:14.311664918 +0000 UTC m=+45.260938553" Apr 17 23:34:14.324566 containerd[1462]: time="2026-04-17T23:34:14.324507962Z" level=info msg="StopContainer for \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\" with timeout 30 (s)" Apr 17 23:34:14.328518 containerd[1462]: time="2026-04-17T23:34:14.328483771Z" level=info msg="Stop container \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\" with signal terminated" Apr 17 23:34:14.328875 containerd[1462]: time="2026-04-17T23:34:14.328836172Z" level=info msg="StopContainer for \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\" with timeout 30 (s)" Apr 17 23:34:14.331444 containerd[1462]: time="2026-04-17T23:34:14.331410597Z" level=info msg="Stop container \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\" with signal terminated" Apr 17 23:34:14.337805 systemd[1]: cri-containerd-be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f.scope: Deactivated successfully. 
Apr 17 23:34:14.352314 systemd[1]: cri-containerd-d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3.scope: Deactivated successfully. Apr 17 23:34:14.364340 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f-rootfs.mount: Deactivated successfully. Apr 17 23:34:14.374917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3-rootfs.mount: Deactivated successfully. Apr 17 23:34:14.393659 containerd[1462]: time="2026-04-17T23:34:14.374255771Z" level=info msg="shim disconnected" id=d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3 namespace=k8s.io Apr 17 23:34:14.393659 containerd[1462]: time="2026-04-17T23:34:14.393656999Z" level=warning msg="cleaning up after shim disconnected" id=d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3 namespace=k8s.io Apr 17 23:34:14.393898 containerd[1462]: time="2026-04-17T23:34:14.393701328Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:34:14.408150 containerd[1462]: time="2026-04-17T23:34:14.407909226Z" level=info msg="shim disconnected" id=be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f namespace=k8s.io Apr 17 23:34:14.408331 containerd[1462]: time="2026-04-17T23:34:14.408215882Z" level=warning msg="cleaning up after shim disconnected" id=be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f namespace=k8s.io Apr 17 23:34:14.408331 containerd[1462]: time="2026-04-17T23:34:14.408234003Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:34:14.421254 containerd[1462]: time="2026-04-17T23:34:14.421159405Z" level=info msg="StopContainer for \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\" returns successfully" Apr 17 23:34:14.424036 containerd[1462]: time="2026-04-17T23:34:14.423974649Z" level=info msg="StopContainer for 
\"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\" returns successfully" Apr 17 23:34:14.439205 containerd[1462]: time="2026-04-17T23:34:14.439158978Z" level=info msg="StopPodSandbox for \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\"" Apr 17 23:34:14.439267 containerd[1462]: time="2026-04-17T23:34:14.439209323Z" level=info msg="Container to stop \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:34:14.439267 containerd[1462]: time="2026-04-17T23:34:14.439219113Z" level=info msg="Container to stop \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:34:14.441243 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b-shm.mount: Deactivated successfully. Apr 17 23:34:14.444246 systemd[1]: cri-containerd-273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b.scope: Deactivated successfully. Apr 17 23:34:14.460261 containerd[1462]: time="2026-04-17T23:34:14.460199367Z" level=info msg="shim disconnected" id=273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b namespace=k8s.io Apr 17 23:34:14.460261 containerd[1462]: time="2026-04-17T23:34:14.460249458Z" level=warning msg="cleaning up after shim disconnected" id=273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b namespace=k8s.io Apr 17 23:34:14.460261 containerd[1462]: time="2026-04-17T23:34:14.460256284Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:34:14.461495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b-rootfs.mount: Deactivated successfully. 
Apr 17 23:34:14.535126 systemd-networkd[1388]: calia1167ea1e24: Link DOWN Apr 17 23:34:14.535134 systemd-networkd[1388]: calia1167ea1e24: Lost carrier Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.532 [INFO][4981] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.533 [INFO][4981] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" iface="eth0" netns="/var/run/netns/cni-fb1f5461-5556-7f56-d05d-4d86de516129" Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.534 [INFO][4981] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" iface="eth0" netns="/var/run/netns/cni-fb1f5461-5556-7f56-d05d-4d86de516129" Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.546 [INFO][4981] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" after=12.336557ms iface="eth0" netns="/var/run/netns/cni-fb1f5461-5556-7f56-d05d-4d86de516129" Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.546 [INFO][4981] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.546 [INFO][4981] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.595 [INFO][4999] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.595 [INFO][4999] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.595 [INFO][4999] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.635 [INFO][4999] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.635 [INFO][4999] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.638 [INFO][4999] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:34:14.644182 containerd[1462]: 2026-04-17 23:34:14.641 [INFO][4981] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:14.645637 containerd[1462]: time="2026-04-17T23:34:14.644434771Z" level=info msg="TearDown network for sandbox \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\" successfully" Apr 17 23:34:14.645637 containerd[1462]: time="2026-04-17T23:34:14.644454833Z" level=info msg="StopPodSandbox for \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\" returns successfully" Apr 17 23:34:14.723174 containerd[1462]: time="2026-04-17T23:34:14.723104216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:14.723763 containerd[1462]: time="2026-04-17T23:34:14.723720692Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:34:14.724694 containerd[1462]: 
time="2026-04-17T23:34:14.724643016Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:14.728862 containerd[1462]: time="2026-04-17T23:34:14.726562232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:34:14.728978 containerd[1462]: time="2026-04-17T23:34:14.727110856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.48847323s" Apr 17 23:34:14.729043 containerd[1462]: time="2026-04-17T23:34:14.728986225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:34:14.732536 containerd[1462]: time="2026-04-17T23:34:14.732484703Z" level=info msg="CreateContainer within sandbox \"1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:34:14.745028 containerd[1462]: time="2026-04-17T23:34:14.744846733Z" level=info msg="CreateContainer within sandbox \"1794b4b3f5954f8de5009d1252bba3e288ec5055439ec4adf36e18614e91ed98\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"65dae51fb7010c7323b848cff54118176936c47e34e43db810cd362ac540d7ee\"" Apr 17 23:34:14.745820 containerd[1462]: time="2026-04-17T23:34:14.745621403Z" level=info msg="StartContainer 
for \"65dae51fb7010c7323b848cff54118176936c47e34e43db810cd362ac540d7ee\"" Apr 17 23:34:14.771199 systemd[1]: Started cri-containerd-65dae51fb7010c7323b848cff54118176936c47e34e43db810cd362ac540d7ee.scope - libcontainer container 65dae51fb7010c7323b848cff54118176936c47e34e43db810cd362ac540d7ee. Apr 17 23:34:14.782765 kubelet[2504]: I0417 23:34:14.782724 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd369009-82a8-4ff9-89b2-990a5a426bba-whisker-backend-key-pair\") pod \"bd369009-82a8-4ff9-89b2-990a5a426bba\" (UID: \"bd369009-82a8-4ff9-89b2-990a5a426bba\") " Apr 17 23:34:14.788555 kubelet[2504]: I0417 23:34:14.788508 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/bd369009-82a8-4ff9-89b2-990a5a426bba-nginx-config\") pod \"bd369009-82a8-4ff9-89b2-990a5a426bba\" (UID: \"bd369009-82a8-4ff9-89b2-990a5a426bba\") " Apr 17 23:34:14.788743 kubelet[2504]: I0417 23:34:14.788623 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzngf\" (UniqueName: \"kubernetes.io/projected/bd369009-82a8-4ff9-89b2-990a5a426bba-kube-api-access-lzngf\") pod \"bd369009-82a8-4ff9-89b2-990a5a426bba\" (UID: \"bd369009-82a8-4ff9-89b2-990a5a426bba\") " Apr 17 23:34:14.788743 kubelet[2504]: I0417 23:34:14.788692 2504 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd369009-82a8-4ff9-89b2-990a5a426bba-whisker-ca-bundle\") pod \"bd369009-82a8-4ff9-89b2-990a5a426bba\" (UID: \"bd369009-82a8-4ff9-89b2-990a5a426bba\") " Apr 17 23:34:14.792496 kubelet[2504]: I0417 23:34:14.791406 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd369009-82a8-4ff9-89b2-990a5a426bba-whisker-backend-key-pair" (OuterVolumeSpecName: 
"whisker-backend-key-pair") pod "bd369009-82a8-4ff9-89b2-990a5a426bba" (UID: "bd369009-82a8-4ff9-89b2-990a5a426bba"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:34:14.792496 kubelet[2504]: I0417 23:34:14.792097 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd369009-82a8-4ff9-89b2-990a5a426bba-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "bd369009-82a8-4ff9-89b2-990a5a426bba" (UID: "bd369009-82a8-4ff9-89b2-990a5a426bba"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:34:14.792748 kubelet[2504]: I0417 23:34:14.792700 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd369009-82a8-4ff9-89b2-990a5a426bba-kube-api-access-lzngf" (OuterVolumeSpecName: "kube-api-access-lzngf") pod "bd369009-82a8-4ff9-89b2-990a5a426bba" (UID: "bd369009-82a8-4ff9-89b2-990a5a426bba"). InnerVolumeSpecName "kube-api-access-lzngf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:34:14.792748 kubelet[2504]: I0417 23:34:14.791170 2504 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd369009-82a8-4ff9-89b2-990a5a426bba-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "bd369009-82a8-4ff9-89b2-990a5a426bba" (UID: "bd369009-82a8-4ff9-89b2-990a5a426bba"). InnerVolumeSpecName "nginx-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:34:14.799244 containerd[1462]: time="2026-04-17T23:34:14.799203394Z" level=info msg="StartContainer for \"65dae51fb7010c7323b848cff54118176936c47e34e43db810cd362ac540d7ee\" returns successfully" Apr 17 23:34:14.889190 kubelet[2504]: I0417 23:34:14.889150 2504 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lzngf\" (UniqueName: \"kubernetes.io/projected/bd369009-82a8-4ff9-89b2-990a5a426bba-kube-api-access-lzngf\") on node \"localhost\" DevicePath \"\"" Apr 17 23:34:14.889190 kubelet[2504]: I0417 23:34:14.889179 2504 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd369009-82a8-4ff9-89b2-990a5a426bba-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Apr 17 23:34:14.889190 kubelet[2504]: I0417 23:34:14.889186 2504 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/bd369009-82a8-4ff9-89b2-990a5a426bba-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Apr 17 23:34:14.889190 kubelet[2504]: I0417 23:34:14.889195 2504 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/bd369009-82a8-4ff9-89b2-990a5a426bba-nginx-config\") on node \"localhost\" DevicePath \"\"" Apr 17 23:34:15.070398 systemd[1]: run-netns-cni\x2dfb1f5461\x2d5556\x2d7f56\x2dd05d\x2d4d86de516129.mount: Deactivated successfully. Apr 17 23:34:15.070483 systemd[1]: var-lib-kubelet-pods-bd369009\x2d82a8\x2d4ff9\x2d89b2\x2d990a5a426bba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlzngf.mount: Deactivated successfully. Apr 17 23:34:15.070535 systemd[1]: var-lib-kubelet-pods-bd369009\x2d82a8\x2d4ff9\x2d89b2\x2d990a5a426bba-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Apr 17 23:34:15.163794 systemd[1]: Removed slice kubepods-besteffort-podbd369009_82a8_4ff9_89b2_990a5a426bba.slice - libcontainer container kubepods-besteffort-podbd369009_82a8_4ff9_89b2_990a5a426bba.slice. Apr 17 23:34:15.312109 kubelet[2504]: I0417 23:34:15.312055 2504 scope.go:117] "RemoveContainer" containerID="be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f" Apr 17 23:34:15.327065 containerd[1462]: time="2026-04-17T23:34:15.326898887Z" level=info msg="RemoveContainer for \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\"" Apr 17 23:34:15.327861 kubelet[2504]: I0417 23:34:15.327737 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-99tqr" podStartSLOduration=15.516925621 podStartE2EDuration="28.32772058s" podCreationTimestamp="2026-04-17 23:33:47 +0000 UTC" firstStartedPulling="2026-04-17 23:34:01.918858379 +0000 UTC m=+32.868131980" lastFinishedPulling="2026-04-17 23:34:14.729653333 +0000 UTC m=+45.678926939" observedRunningTime="2026-04-17 23:34:15.325601994 +0000 UTC m=+46.274875614" watchObservedRunningTime="2026-04-17 23:34:15.32772058 +0000 UTC m=+46.276994197" Apr 17 23:34:15.335936 kubelet[2504]: I0417 23:34:15.335878 2504 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:34:15.336256 containerd[1462]: time="2026-04-17T23:34:15.336228291Z" level=info msg="RemoveContainer for \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\" returns successfully" Apr 17 23:34:15.338809 kubelet[2504]: I0417 23:34:15.338765 2504 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:34:15.345647 kubelet[2504]: I0417 23:34:15.345578 2504 scope.go:117] "RemoveContainer" 
containerID="d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3" Apr 17 23:34:15.347918 containerd[1462]: time="2026-04-17T23:34:15.347609234Z" level=info msg="RemoveContainer for \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\"" Apr 17 23:34:15.357203 containerd[1462]: time="2026-04-17T23:34:15.357149300Z" level=info msg="RemoveContainer for \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\" returns successfully" Apr 17 23:34:15.357633 kubelet[2504]: I0417 23:34:15.357594 2504 scope.go:117] "RemoveContainer" containerID="be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f" Apr 17 23:34:15.369304 containerd[1462]: time="2026-04-17T23:34:15.362604316Z" level=error msg="ContainerStatus for \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\": not found" Apr 17 23:34:15.378601 kubelet[2504]: E0417 23:34:15.378552 2504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\": not found" containerID="be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f" Apr 17 23:34:15.393245 kubelet[2504]: I0417 23:34:15.378607 2504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f"} err="failed to get container status \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\": not found" Apr 17 23:34:15.393245 kubelet[2504]: I0417 23:34:15.393242 2504 scope.go:117] "RemoveContainer" 
containerID="d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3" Apr 17 23:34:15.394218 containerd[1462]: time="2026-04-17T23:34:15.393971979Z" level=error msg="ContainerStatus for \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\": not found" Apr 17 23:34:15.395318 kubelet[2504]: E0417 23:34:15.395295 2504 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\": not found" containerID="d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3" Apr 17 23:34:15.395383 kubelet[2504]: I0417 23:34:15.395327 2504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3"} err="failed to get container status \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\": not found" Apr 17 23:34:15.395383 kubelet[2504]: I0417 23:34:15.395342 2504 scope.go:117] "RemoveContainer" containerID="be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f" Apr 17 23:34:15.395559 containerd[1462]: time="2026-04-17T23:34:15.395525677Z" level=error msg="ContainerStatus for \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\": not found" Apr 17 23:34:15.395641 kubelet[2504]: I0417 23:34:15.395618 2504 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f"} err="failed to get container status \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"be8e8ca7a8745c8c02b3852a482efd4ecb91e3ef2b73d4af1826c8e09bbe4b7f\": not found" Apr 17 23:34:15.395662 kubelet[2504]: I0417 23:34:15.395643 2504 scope.go:117] "RemoveContainer" containerID="d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3" Apr 17 23:34:15.395846 containerd[1462]: time="2026-04-17T23:34:15.395811967Z" level=error msg="ContainerStatus for \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\": not found" Apr 17 23:34:15.396049 kubelet[2504]: I0417 23:34:15.395947 2504 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3"} err="failed to get container status \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"d748a8980776a710ff494fc5c44685d274a6c9a364cb5389062ff22538fa88f3\": not found" Apr 17 23:34:15.421515 systemd[1]: Created slice kubepods-besteffort-pod5d35d9f1_d8f4_485c_95c2_86f53afe16b0.slice - libcontainer container kubepods-besteffort-pod5d35d9f1_d8f4_485c_95c2_86f53afe16b0.slice. 
Apr 17 23:34:15.494458 kubelet[2504]: I0417 23:34:15.494388 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5d35d9f1-d8f4-485c-95c2-86f53afe16b0-nginx-config\") pod \"whisker-5b5b8f6d89-gwtl9\" (UID: \"5d35d9f1-d8f4-485c-95c2-86f53afe16b0\") " pod="calico-system/whisker-5b5b8f6d89-gwtl9" Apr 17 23:34:15.494458 kubelet[2504]: I0417 23:34:15.494436 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5d35d9f1-d8f4-485c-95c2-86f53afe16b0-whisker-backend-key-pair\") pod \"whisker-5b5b8f6d89-gwtl9\" (UID: \"5d35d9f1-d8f4-485c-95c2-86f53afe16b0\") " pod="calico-system/whisker-5b5b8f6d89-gwtl9" Apr 17 23:34:15.494458 kubelet[2504]: I0417 23:34:15.494452 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qb4f9\" (UniqueName: \"kubernetes.io/projected/5d35d9f1-d8f4-485c-95c2-86f53afe16b0-kube-api-access-qb4f9\") pod \"whisker-5b5b8f6d89-gwtl9\" (UID: \"5d35d9f1-d8f4-485c-95c2-86f53afe16b0\") " pod="calico-system/whisker-5b5b8f6d89-gwtl9" Apr 17 23:34:15.494458 kubelet[2504]: I0417 23:34:15.494477 2504 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d35d9f1-d8f4-485c-95c2-86f53afe16b0-whisker-ca-bundle\") pod \"whisker-5b5b8f6d89-gwtl9\" (UID: \"5d35d9f1-d8f4-485c-95c2-86f53afe16b0\") " pod="calico-system/whisker-5b5b8f6d89-gwtl9" Apr 17 23:34:15.725846 containerd[1462]: time="2026-04-17T23:34:15.725626036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b5b8f6d89-gwtl9,Uid:5d35d9f1-d8f4-485c-95c2-86f53afe16b0,Namespace:calico-system,Attempt:0,}" Apr 17 23:34:15.808357 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:58776.service - OpenSSH per-connection server 
daemon (10.0.0.1:58776). Apr 17 23:34:15.871327 sshd[5113]: Accepted publickey for core from 10.0.0.1 port 58776 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:15.872870 sshd[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:15.877799 systemd-logind[1440]: New session 9 of user core. Apr 17 23:34:15.880549 systemd-networkd[1388]: cali93db61dd579: Link UP Apr 17 23:34:15.881810 systemd-networkd[1388]: cali93db61dd579: Gained carrier Apr 17 23:34:15.883140 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.790 [INFO][5097] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0 whisker-5b5b8f6d89- calico-system 5d35d9f1-d8f4-485c-95c2-86f53afe16b0 1092 0 2026-04-17 23:34:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b5b8f6d89 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5b5b8f6d89-gwtl9 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali93db61dd579 [] [] }} ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Namespace="calico-system" Pod="whisker-5b5b8f6d89-gwtl9" WorkloadEndpoint="localhost-k8s-whisker--5b5b8f6d89--gwtl9-" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.790 [INFO][5097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Namespace="calico-system" Pod="whisker-5b5b8f6d89-gwtl9" WorkloadEndpoint="localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.819 [INFO][5105] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" HandleID="k8s-pod-network.49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Workload="localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.829 [INFO][5105] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" HandleID="k8s-pod-network.49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Workload="localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040fa40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5b5b8f6d89-gwtl9", "timestamp":"2026-04-17 23:34:15.819527345 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000378dc0)} Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.829 [INFO][5105] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.829 [INFO][5105] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.829 [INFO][5105] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.832 [INFO][5105] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" host="localhost" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.841 [INFO][5105] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.850 [INFO][5105] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.853 [INFO][5105] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.859 [INFO][5105] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.859 [INFO][5105] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" host="localhost" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.863 [INFO][5105] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.867 [INFO][5105] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" host="localhost" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.875 [INFO][5105] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 
handle="k8s-pod-network.49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" host="localhost" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.876 [INFO][5105] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" host="localhost" Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.876 [INFO][5105] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:34:15.893639 containerd[1462]: 2026-04-17 23:34:15.876 [INFO][5105] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" HandleID="k8s-pod-network.49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Workload="localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0" Apr 17 23:34:15.894844 containerd[1462]: 2026-04-17 23:34:15.878 [INFO][5097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Namespace="calico-system" Pod="whisker-5b5b8f6d89-gwtl9" WorkloadEndpoint="localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0", GenerateName:"whisker-5b5b8f6d89-", Namespace:"calico-system", SelfLink:"", UID:"5d35d9f1-d8f4-485c-95c2-86f53afe16b0", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b5b8f6d89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5b5b8f6d89-gwtl9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali93db61dd579", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:15.894844 containerd[1462]: 2026-04-17 23:34:15.879 [INFO][5097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Namespace="calico-system" Pod="whisker-5b5b8f6d89-gwtl9" WorkloadEndpoint="localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0" Apr 17 23:34:15.894844 containerd[1462]: 2026-04-17 23:34:15.879 [INFO][5097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93db61dd579 ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Namespace="calico-system" Pod="whisker-5b5b8f6d89-gwtl9" WorkloadEndpoint="localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0" Apr 17 23:34:15.894844 containerd[1462]: 2026-04-17 23:34:15.880 [INFO][5097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Namespace="calico-system" Pod="whisker-5b5b8f6d89-gwtl9" WorkloadEndpoint="localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0" Apr 17 23:34:15.894844 containerd[1462]: 2026-04-17 23:34:15.881 [INFO][5097] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Namespace="calico-system" Pod="whisker-5b5b8f6d89-gwtl9" 
WorkloadEndpoint="localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0", GenerateName:"whisker-5b5b8f6d89-", Namespace:"calico-system", SelfLink:"", UID:"5d35d9f1-d8f4-485c-95c2-86f53afe16b0", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b5b8f6d89", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e", Pod:"whisker-5b5b8f6d89-gwtl9", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali93db61dd579", MAC:"56:b3:79:f2:3f:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:34:15.894844 containerd[1462]: 2026-04-17 23:34:15.888 [INFO][5097] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e" Namespace="calico-system" Pod="whisker-5b5b8f6d89-gwtl9" WorkloadEndpoint="localhost-k8s-whisker--5b5b8f6d89--gwtl9-eth0" Apr 17 23:34:15.917607 containerd[1462]: time="2026-04-17T23:34:15.917348612Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:34:15.918301 containerd[1462]: time="2026-04-17T23:34:15.918202024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:34:15.918301 containerd[1462]: time="2026-04-17T23:34:15.918217723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:15.918402 containerd[1462]: time="2026-04-17T23:34:15.918376375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:34:15.942297 systemd[1]: Started cri-containerd-49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e.scope - libcontainer container 49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e. Apr 17 23:34:15.953726 systemd-resolved[1391]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 17 23:34:15.975789 containerd[1462]: time="2026-04-17T23:34:15.975752264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b5b8f6d89-gwtl9,Uid:5d35d9f1-d8f4-485c-95c2-86f53afe16b0,Namespace:calico-system,Attempt:0,} returns sandbox id \"49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e\"" Apr 17 23:34:15.983954 containerd[1462]: time="2026-04-17T23:34:15.983649697Z" level=info msg="CreateContainer within sandbox \"49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:34:16.000722 containerd[1462]: time="2026-04-17T23:34:16.000657560Z" level=info msg="CreateContainer within sandbox \"49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id 
\"f8409a69ef60d801ab15bce5680a5d00b997dbe5bb05f23ec3637382ae102d57\"" Apr 17 23:34:16.002843 containerd[1462]: time="2026-04-17T23:34:16.001544209Z" level=info msg="StartContainer for \"f8409a69ef60d801ab15bce5680a5d00b997dbe5bb05f23ec3637382ae102d57\"" Apr 17 23:34:16.036250 systemd[1]: Started cri-containerd-f8409a69ef60d801ab15bce5680a5d00b997dbe5bb05f23ec3637382ae102d57.scope - libcontainer container f8409a69ef60d801ab15bce5680a5d00b997dbe5bb05f23ec3637382ae102d57. Apr 17 23:34:16.079759 containerd[1462]: time="2026-04-17T23:34:16.079629590Z" level=info msg="StartContainer for \"f8409a69ef60d801ab15bce5680a5d00b997dbe5bb05f23ec3637382ae102d57\" returns successfully" Apr 17 23:34:16.087267 containerd[1462]: time="2026-04-17T23:34:16.086893054Z" level=info msg="CreateContainer within sandbox \"49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:34:16.100927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3378786009.mount: Deactivated successfully. Apr 17 23:34:16.117745 containerd[1462]: time="2026-04-17T23:34:16.117667583Z" level=info msg="CreateContainer within sandbox \"49e9c2ff1b2139f3e08c99362bf7ccfeb991f3ccb63c3f31b2f37f7cab24e87e\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"98e70952c57bbbb7cb13a621240ab2c359aefb0fc9335cde5fec96b5873667c3\"" Apr 17 23:34:16.118662 containerd[1462]: time="2026-04-17T23:34:16.118635877Z" level=info msg="StartContainer for \"98e70952c57bbbb7cb13a621240ab2c359aefb0fc9335cde5fec96b5873667c3\"" Apr 17 23:34:16.151146 systemd[1]: Started cri-containerd-98e70952c57bbbb7cb13a621240ab2c359aefb0fc9335cde5fec96b5873667c3.scope - libcontainer container 98e70952c57bbbb7cb13a621240ab2c359aefb0fc9335cde5fec96b5873667c3. 
Apr 17 23:34:16.162578 sshd[5113]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:16.165243 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:58776.service: Deactivated successfully. Apr 17 23:34:16.166710 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:34:16.168710 systemd-logind[1440]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:34:16.169601 systemd-logind[1440]: Removed session 9. Apr 17 23:34:16.190468 containerd[1462]: time="2026-04-17T23:34:16.190420275Z" level=info msg="StartContainer for \"98e70952c57bbbb7cb13a621240ab2c359aefb0fc9335cde5fec96b5873667c3\" returns successfully" Apr 17 23:34:16.327641 kubelet[2504]: I0417 23:34:16.327574 2504 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5b5b8f6d89-gwtl9" podStartSLOduration=1.327554515 podStartE2EDuration="1.327554515s" podCreationTimestamp="2026-04-17 23:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:34:16.327302591 +0000 UTC m=+47.276576200" watchObservedRunningTime="2026-04-17 23:34:16.327554515 +0000 UTC m=+47.276828127" Apr 17 23:34:17.137093 kubelet[2504]: I0417 23:34:17.137033 2504 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd369009-82a8-4ff9-89b2-990a5a426bba" path="/var/lib/kubelet/pods/bd369009-82a8-4ff9-89b2-990a5a426bba/volumes" Apr 17 23:34:17.880359 systemd-networkd[1388]: cali93db61dd579: Gained IPv6LL Apr 17 23:34:21.176829 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:49370.service - OpenSSH per-connection server daemon (10.0.0.1:49370). 
Apr 17 23:34:21.208444 sshd[5284]: Accepted publickey for core from 10.0.0.1 port 49370 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:21.209774 sshd[5284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:21.213631 systemd-logind[1440]: New session 10 of user core. Apr 17 23:34:21.219180 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:34:21.352127 sshd[5284]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:21.355736 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:49370.service: Deactivated successfully. Apr 17 23:34:21.357348 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:34:21.358083 systemd-logind[1440]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:34:21.359128 systemd-logind[1440]: Removed session 10. Apr 17 23:34:26.369447 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:49396.service - OpenSSH per-connection server daemon (10.0.0.1:49396). Apr 17 23:34:26.408417 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 49396 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:26.444874 sshd[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:26.461075 systemd-logind[1440]: New session 11 of user core. Apr 17 23:34:26.468181 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:34:26.598502 sshd[5335]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:26.611952 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:49396.service: Deactivated successfully. Apr 17 23:34:26.613974 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:34:26.615711 systemd-logind[1440]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:34:26.621465 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:49402.service - OpenSSH per-connection server daemon (10.0.0.1:49402). 
Apr 17 23:34:26.623598 systemd-logind[1440]: Removed session 11. Apr 17 23:34:26.650483 sshd[5351]: Accepted publickey for core from 10.0.0.1 port 49402 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:26.652626 sshd[5351]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:26.659279 systemd-logind[1440]: New session 12 of user core. Apr 17 23:34:26.671382 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:34:26.851530 sshd[5351]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:26.861751 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:49402.service: Deactivated successfully. Apr 17 23:34:26.866269 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:34:26.868099 systemd-logind[1440]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:34:26.876605 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:49414.service - OpenSSH per-connection server daemon (10.0.0.1:49414). Apr 17 23:34:26.879819 systemd-logind[1440]: Removed session 12. Apr 17 23:34:26.928553 sshd[5364]: Accepted publickey for core from 10.0.0.1 port 49414 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:26.929926 sshd[5364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:26.934204 systemd-logind[1440]: New session 13 of user core. Apr 17 23:34:26.944334 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 17 23:34:27.067094 sshd[5364]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:27.069959 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:49414.service: Deactivated successfully. Apr 17 23:34:27.072342 systemd[1]: session-13.scope: Deactivated successfully. Apr 17 23:34:27.073046 systemd-logind[1440]: Session 13 logged out. Waiting for processes to exit. Apr 17 23:34:27.074553 systemd-logind[1440]: Removed session 13. 
Apr 17 23:34:29.125791 containerd[1462]: time="2026-04-17T23:34:29.125731672Z" level=info msg="StopPodSandbox for \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\"" Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.164 [WARNING][5408] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" WorkloadEndpoint="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.164 [INFO][5408] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.164 [INFO][5408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" iface="eth0" netns="" Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.164 [INFO][5408] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.164 [INFO][5408] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.206 [INFO][5419] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.207 [INFO][5419] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.207 [INFO][5419] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.218 [WARNING][5419] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.220 [INFO][5419] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.224 [INFO][5419] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:34:29.229968 containerd[1462]: 2026-04-17 23:34:29.226 [INFO][5408] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:29.230709 containerd[1462]: time="2026-04-17T23:34:29.230026981Z" level=info msg="TearDown network for sandbox \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\" successfully" Apr 17 23:34:29.230709 containerd[1462]: time="2026-04-17T23:34:29.230055757Z" level=info msg="StopPodSandbox for \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\" returns successfully" Apr 17 23:34:29.230827 containerd[1462]: time="2026-04-17T23:34:29.230807595Z" level=info msg="RemovePodSandbox for \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\"" Apr 17 23:34:29.232650 containerd[1462]: time="2026-04-17T23:34:29.232527620Z" level=info msg="Forcibly stopping sandbox \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\"" Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.267 [WARNING][5437] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" WorkloadEndpoint="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.267 [INFO][5437] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.267 [INFO][5437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" iface="eth0" netns="" Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.267 [INFO][5437] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.267 [INFO][5437] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.294 [INFO][5446] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.294 [INFO][5446] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.294 [INFO][5446] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.300 [WARNING][5446] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.300 [INFO][5446] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" HandleID="k8s-pod-network.273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Workload="localhost-k8s-whisker--65dd96956d--nxrcj-eth0" Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.302 [INFO][5446] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:34:29.305771 containerd[1462]: 2026-04-17 23:34:29.303 [INFO][5437] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b" Apr 17 23:34:29.306089 containerd[1462]: time="2026-04-17T23:34:29.305895478Z" level=info msg="TearDown network for sandbox \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\" successfully" Apr 17 23:34:29.316507 containerd[1462]: time="2026-04-17T23:34:29.316390500Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:34:29.316854 containerd[1462]: time="2026-04-17T23:34:29.316551560Z" level=info msg="RemovePodSandbox \"273b381743514cde0e2d3ef6c6b81db21e322584e98c3ed5f53323c602713c9b\" returns successfully" Apr 17 23:34:32.082568 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:46582.service - OpenSSH per-connection server daemon (10.0.0.1:46582). 
Apr 17 23:34:32.127489 sshd[5454]: Accepted publickey for core from 10.0.0.1 port 46582 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:32.128910 sshd[5454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:32.132431 systemd-logind[1440]: New session 14 of user core. Apr 17 23:34:32.143281 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 17 23:34:32.268286 sshd[5454]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:32.280573 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:46582.service: Deactivated successfully. Apr 17 23:34:32.282064 systemd[1]: session-14.scope: Deactivated successfully. Apr 17 23:34:32.283228 systemd-logind[1440]: Session 14 logged out. Waiting for processes to exit. Apr 17 23:34:32.289734 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:46584.service - OpenSSH per-connection server daemon (10.0.0.1:46584). Apr 17 23:34:32.290891 systemd-logind[1440]: Removed session 14. Apr 17 23:34:32.317030 sshd[5469]: Accepted publickey for core from 10.0.0.1 port 46584 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:32.318434 sshd[5469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:32.323641 systemd-logind[1440]: New session 15 of user core. Apr 17 23:34:32.337280 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 17 23:34:32.545924 sshd[5469]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:32.557897 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:46584.service: Deactivated successfully. Apr 17 23:34:32.559734 systemd[1]: session-15.scope: Deactivated successfully. Apr 17 23:34:32.560883 systemd-logind[1440]: Session 15 logged out. Waiting for processes to exit. Apr 17 23:34:32.567618 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:46586.service - OpenSSH per-connection server daemon (10.0.0.1:46586). 
Apr 17 23:34:32.568547 systemd-logind[1440]: Removed session 15. Apr 17 23:34:32.603319 sshd[5481]: Accepted publickey for core from 10.0.0.1 port 46586 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:32.604731 sshd[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:32.608251 systemd-logind[1440]: New session 16 of user core. Apr 17 23:34:32.617454 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 17 23:34:33.166735 sshd[5481]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:33.185499 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:46598.service - OpenSSH per-connection server daemon (10.0.0.1:46598). Apr 17 23:34:33.188555 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:46586.service: Deactivated successfully. Apr 17 23:34:33.196751 systemd[1]: session-16.scope: Deactivated successfully. Apr 17 23:34:33.201582 systemd-logind[1440]: Session 16 logged out. Waiting for processes to exit. Apr 17 23:34:33.207094 systemd-logind[1440]: Removed session 16. Apr 17 23:34:33.245086 sshd[5505]: Accepted publickey for core from 10.0.0.1 port 46598 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:33.246743 sshd[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:33.253106 systemd-logind[1440]: New session 17 of user core. Apr 17 23:34:33.263313 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 17 23:34:33.607688 sshd[5505]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:33.618867 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:46598.service: Deactivated successfully. Apr 17 23:34:33.620469 systemd[1]: session-17.scope: Deactivated successfully. Apr 17 23:34:33.621919 systemd-logind[1440]: Session 17 logged out. Waiting for processes to exit. 
Apr 17 23:34:33.624134 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:46602.service - OpenSSH per-connection server daemon (10.0.0.1:46602). Apr 17 23:34:33.627823 systemd-logind[1440]: Removed session 17. Apr 17 23:34:33.672894 sshd[5521]: Accepted publickey for core from 10.0.0.1 port 46602 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:33.674238 sshd[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:33.679214 systemd-logind[1440]: New session 18 of user core. Apr 17 23:34:33.690306 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 17 23:34:33.806850 sshd[5521]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:33.810631 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:46602.service: Deactivated successfully. Apr 17 23:34:33.811941 systemd[1]: session-18.scope: Deactivated successfully. Apr 17 23:34:33.812417 systemd-logind[1440]: Session 18 logged out. Waiting for processes to exit. Apr 17 23:34:33.813362 systemd-logind[1440]: Removed session 18. Apr 17 23:34:35.413760 kubelet[2504]: I0417 23:34:35.413518 2504 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:34:38.816233 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:46612.service - OpenSSH per-connection server daemon (10.0.0.1:46612). Apr 17 23:34:38.851645 sshd[5570]: Accepted publickey for core from 10.0.0.1 port 46612 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:38.852776 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:38.859224 systemd-logind[1440]: New session 19 of user core. Apr 17 23:34:38.863185 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 17 23:34:38.985626 sshd[5570]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:38.989316 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:46612.service: Deactivated successfully. 
Apr 17 23:34:38.995420 systemd[1]: session-19.scope: Deactivated successfully. Apr 17 23:34:38.999658 systemd-logind[1440]: Session 19 logged out. Waiting for processes to exit. Apr 17 23:34:39.002075 systemd-logind[1440]: Removed session 19. Apr 17 23:34:39.312028 systemd[1]: run-containerd-runc-k8s.io-de1d02fe126799e4d3557b3143aef450bfd70be3180f3a3707caffecbe528aa0-runc.EImLFy.mount: Deactivated successfully. Apr 17 23:34:44.001637 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:54914.service - OpenSSH per-connection server daemon (10.0.0.1:54914). Apr 17 23:34:44.057074 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 54914 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:44.058375 sshd[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:44.064586 systemd-logind[1440]: New session 20 of user core. Apr 17 23:34:44.073286 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 17 23:34:44.235226 sshd[5615]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:44.238229 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:54914.service: Deactivated successfully. Apr 17 23:34:44.239564 systemd[1]: session-20.scope: Deactivated successfully. Apr 17 23:34:44.240246 systemd-logind[1440]: Session 20 logged out. Waiting for processes to exit. Apr 17 23:34:44.241251 systemd-logind[1440]: Removed session 20. Apr 17 23:34:49.245681 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:54952.service - OpenSSH per-connection server daemon (10.0.0.1:54952). Apr 17 23:34:49.276119 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 54952 ssh2: RSA SHA256:E6pky6dhKlUTTc8PKl7cFvWht1oyD+LPE0dplBcc100 Apr 17 23:34:49.277230 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:34:49.280909 systemd-logind[1440]: New session 21 of user core. Apr 17 23:34:49.286222 systemd[1]: Started session-21.scope - Session 21 of User core. 
Apr 17 23:34:49.432717 sshd[5654]: pam_unix(sshd:session): session closed for user core Apr 17 23:34:49.435469 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:54952.service: Deactivated successfully. Apr 17 23:34:49.436916 systemd[1]: session-21.scope: Deactivated successfully. Apr 17 23:34:49.437469 systemd-logind[1440]: Session 21 logged out. Waiting for processes to exit. Apr 17 23:34:49.438384 systemd-logind[1440]: Removed session 21. Apr 17 23:34:50.137924 kubelet[2504]: E0417 23:34:50.137841 2504 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"