Apr 14 01:10:16.876223 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 14 01:10:16.876242 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 01:10:16.876252 kernel: BIOS-provided physical RAM map:
Apr 14 01:10:16.876257 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Apr 14 01:10:16.876262 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Apr 14 01:10:16.876266 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Apr 14 01:10:16.876271 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Apr 14 01:10:16.876275 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Apr 14 01:10:16.876279 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Apr 14 01:10:16.876285 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Apr 14 01:10:16.876289 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Apr 14 01:10:16.876293 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Apr 14 01:10:16.876297 kernel: NX (Execute Disable) protection: active
Apr 14 01:10:16.876301 kernel: APIC: Static calls initialized
Apr 14 01:10:16.876307 kernel: SMBIOS 2.8 present.
Apr 14 01:10:16.876313 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Apr 14 01:10:16.876349 kernel: Hypervisor detected: KVM
Apr 14 01:10:16.876354 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 14 01:10:16.876359 kernel: kvm-clock: using sched offset of 3766189519 cycles
Apr 14 01:10:16.876365 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 14 01:10:16.876369 kernel: tsc: Detected 2793.438 MHz processor
Apr 14 01:10:16.876374 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 14 01:10:16.876379 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 14 01:10:16.876384 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x10000000000
Apr 14 01:10:16.876390 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Apr 14 01:10:16.876395 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 14 01:10:16.876400 kernel: Using GB pages for direct mapping
Apr 14 01:10:16.876419 kernel: ACPI: Early table checksum verification disabled
Apr 14 01:10:16.876424 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Apr 14 01:10:16.876429 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.876434 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.876438 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.876443 kernel: ACPI: FACS 0x000000009CFE0000 000040
Apr 14 01:10:16.876449 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.876454 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.876458 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.876463 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Apr 14 01:10:16.876468 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Apr 14 01:10:16.876472 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Apr 14 01:10:16.876477 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Apr 14 01:10:16.876484 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Apr 14 01:10:16.876491 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Apr 14 01:10:16.876495 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Apr 14 01:10:16.876501 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Apr 14 01:10:16.876505 kernel: No NUMA configuration found
Apr 14 01:10:16.876510 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Apr 14 01:10:16.876515 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Apr 14 01:10:16.876525 kernel: Zone ranges:
Apr 14 01:10:16.876533 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 14 01:10:16.876542 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Apr 14 01:10:16.876550 kernel: Normal empty
Apr 14 01:10:16.876559 kernel: Movable zone start for each node
Apr 14 01:10:16.876568 kernel: Early memory node ranges
Apr 14 01:10:16.876577 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Apr 14 01:10:16.876582 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Apr 14 01:10:16.876587 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Apr 14 01:10:16.876592 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 14 01:10:16.876599 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Apr 14 01:10:16.876604 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Apr 14 01:10:16.876609 kernel: ACPI: PM-Timer IO Port: 0x608
Apr 14 01:10:16.876614 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 14 01:10:16.876619 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 14 01:10:16.876623 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Apr 14 01:10:16.876628 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 14 01:10:16.876633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 14 01:10:16.876638 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 14 01:10:16.876645 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 14 01:10:16.876649 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 14 01:10:16.876654 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 14 01:10:16.876659 kernel: TSC deadline timer available
Apr 14 01:10:16.876664 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Apr 14 01:10:16.876669 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 14 01:10:16.876674 kernel: kvm-guest: KVM setup pv remote TLB flush
Apr 14 01:10:16.876679 kernel: kvm-guest: setup PV sched yield
Apr 14 01:10:16.876684 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Apr 14 01:10:16.876690 kernel: Booting paravirtualized kernel on KVM
Apr 14 01:10:16.876695 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 14 01:10:16.876700 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Apr 14 01:10:16.876705 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u524288
Apr 14 01:10:16.876710 kernel: pcpu-alloc: s196328 r8192 d28952 u524288 alloc=1*2097152
Apr 14 01:10:16.876715 kernel: pcpu-alloc: [0] 0 1 2 3
Apr 14 01:10:16.876720 kernel: kvm-guest: PV spinlocks enabled
Apr 14 01:10:16.876725 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 14 01:10:16.876731 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 01:10:16.876737 kernel: random: crng init done
Apr 14 01:10:16.876742 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 14 01:10:16.876747 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 14 01:10:16.876752 kernel: Fallback order for Node 0: 0
Apr 14 01:10:16.876757 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Apr 14 01:10:16.876762 kernel: Policy zone: DMA32
Apr 14 01:10:16.876767 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 14 01:10:16.876772 kernel: Memory: 2433648K/2571752K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 137900K reserved, 0K cma-reserved)
Apr 14 01:10:16.876778 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Apr 14 01:10:16.876783 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 14 01:10:16.876788 kernel: ftrace: allocated 149 pages with 4 groups
Apr 14 01:10:16.876793 kernel: Dynamic Preempt: voluntary
Apr 14 01:10:16.876798 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 14 01:10:16.876804 kernel: rcu: RCU event tracing is enabled.
Apr 14 01:10:16.876809 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Apr 14 01:10:16.876814 kernel: Trampoline variant of Tasks RCU enabled.
Apr 14 01:10:16.876819 kernel: Rude variant of Tasks RCU enabled.
Apr 14 01:10:16.876825 kernel: Tracing variant of Tasks RCU enabled.
Apr 14 01:10:16.876830 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 14 01:10:16.876835 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Apr 14 01:10:16.876840 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Apr 14 01:10:16.876845 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 14 01:10:16.876850 kernel: Console: colour VGA+ 80x25
Apr 14 01:10:16.876854 kernel: printk: console [ttyS0] enabled
Apr 14 01:10:16.876859 kernel: ACPI: Core revision 20230628
Apr 14 01:10:16.876864 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Apr 14 01:10:16.876869 kernel: APIC: Switch to symmetric I/O mode setup
Apr 14 01:10:16.876876 kernel: x2apic enabled
Apr 14 01:10:16.876880 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 14 01:10:16.876885 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Apr 14 01:10:16.876890 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Apr 14 01:10:16.876895 kernel: kvm-guest: setup PV IPIs
Apr 14 01:10:16.876902 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Apr 14 01:10:16.876911 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 01:10:16.876929 kernel: Calibrating delay loop (skipped) preset value.. 5586.87 BogoMIPS (lpj=2793438)
Apr 14 01:10:16.876939 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Apr 14 01:10:16.876948 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Apr 14 01:10:16.876953 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Apr 14 01:10:16.876960 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 14 01:10:16.876965 kernel: Spectre V2 : Mitigation: Retpolines
Apr 14 01:10:16.876971 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 14 01:10:16.876976 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 14 01:10:16.876982 kernel: RETBleed: Vulnerable
Apr 14 01:10:16.876989 kernel: Speculative Store Bypass: Vulnerable
Apr 14 01:10:16.876994 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 14 01:10:16.877000 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 14 01:10:16.877005 kernel: active return thunk: its_return_thunk
Apr 14 01:10:16.877010 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 14 01:10:16.877016 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 14 01:10:16.877021 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 14 01:10:16.877026 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 14 01:10:16.877032 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 14 01:10:16.877039 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 14 01:10:16.877044 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 14 01:10:16.877050 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 14 01:10:16.877055 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Apr 14 01:10:16.877060 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Apr 14 01:10:16.877066 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Apr 14 01:10:16.877071 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Apr 14 01:10:16.877076 kernel: Freeing SMP alternatives memory: 32K
Apr 14 01:10:16.877082 kernel: pid_max: default: 32768 minimum: 301
Apr 14 01:10:16.877089 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 14 01:10:16.877094 kernel: landlock: Up and running.
Apr 14 01:10:16.877100 kernel: SELinux: Initializing.
Apr 14 01:10:16.877105 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 01:10:16.877110 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Apr 14 01:10:16.877116 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8370C CPU @ 2.80GHz (family: 0x6, model: 0x6a, stepping: 0x6)
Apr 14 01:10:16.877121 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 01:10:16.877127 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 01:10:16.877132 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Apr 14 01:10:16.877139 kernel: Performance Events: unsupported p6 CPU model 106 no PMU driver, software events only.
Apr 14 01:10:16.877145 kernel: signal: max sigframe size: 3632
Apr 14 01:10:16.877150 kernel: rcu: Hierarchical SRCU implementation.
Apr 14 01:10:16.877156 kernel: rcu: Max phase no-delay instances is 400.
Apr 14 01:10:16.877161 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 14 01:10:16.877167 kernel: smp: Bringing up secondary CPUs ...
Apr 14 01:10:16.877172 kernel: smpboot: x86: Booting SMP configuration:
Apr 14 01:10:16.877178 kernel: .... node #0, CPUs: #1 #2 #3
Apr 14 01:10:16.877183 kernel: smp: Brought up 1 node, 4 CPUs
Apr 14 01:10:16.877190 kernel: smpboot: Max logical packages: 1
Apr 14 01:10:16.877196 kernel: smpboot: Total of 4 processors activated (22347.50 BogoMIPS)
Apr 14 01:10:16.877201 kernel: devtmpfs: initialized
Apr 14 01:10:16.877206 kernel: x86/mm: Memory block size: 128MB
Apr 14 01:10:16.877212 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 14 01:10:16.877217 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Apr 14 01:10:16.877223 kernel: pinctrl core: initialized pinctrl subsystem
Apr 14 01:10:16.877228 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 14 01:10:16.877233 kernel: audit: initializing netlink subsys (disabled)
Apr 14 01:10:16.877240 kernel: audit: type=2000 audit(1776129015.931:1): state=initialized audit_enabled=0 res=1
Apr 14 01:10:16.877246 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 14 01:10:16.877251 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 14 01:10:16.877256 kernel: cpuidle: using governor menu
Apr 14 01:10:16.877262 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 14 01:10:16.877267 kernel: dca service started, version 1.12.1
Apr 14 01:10:16.877272 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Apr 14 01:10:16.877278 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Apr 14 01:10:16.877283 kernel: PCI: Using configuration type 1 for base access
Apr 14 01:10:16.877290 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 14 01:10:16.877296 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 14 01:10:16.877301 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 14 01:10:16.877306 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 14 01:10:16.877312 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 14 01:10:16.877344 kernel: ACPI: Added _OSI(Module Device)
Apr 14 01:10:16.877349 kernel: ACPI: Added _OSI(Processor Device)
Apr 14 01:10:16.877355 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 14 01:10:16.877360 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Apr 14 01:10:16.877368 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 14 01:10:16.877373 kernel: ACPI: Interpreter enabled
Apr 14 01:10:16.877379 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 14 01:10:16.877385 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 14 01:10:16.877390 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 14 01:10:16.877396 kernel: PCI: Using E820 reservations for host bridge windows
Apr 14 01:10:16.877401 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Apr 14 01:10:16.877421 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 14 01:10:16.877532 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Apr 14 01:10:16.877625 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Apr 14 01:10:16.877683 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Apr 14 01:10:16.877691 kernel: PCI host bridge to bus 0000:00
Apr 14 01:10:16.877749 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 14 01:10:16.877800 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 14 01:10:16.877849 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 14 01:10:16.877901 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Apr 14 01:10:16.877966 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Apr 14 01:10:16.878024 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Apr 14 01:10:16.878075 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 14 01:10:16.878146 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Apr 14 01:10:16.878210 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Apr 14 01:10:16.878269 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Apr 14 01:10:16.878358 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Apr 14 01:10:16.878437 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Apr 14 01:10:16.878495 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 14 01:10:16.878556 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Apr 14 01:10:16.878640 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Apr 14 01:10:16.878698 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Apr 14 01:10:16.878758 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Apr 14 01:10:16.878821 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Apr 14 01:10:16.878878 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Apr 14 01:10:16.878933 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Apr 14 01:10:16.879008 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Apr 14 01:10:16.879071 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 14 01:10:16.879127 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Apr 14 01:10:16.879185 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Apr 14 01:10:16.879265 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Apr 14 01:10:16.879351 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Apr 14 01:10:16.879434 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Apr 14 01:10:16.879491 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Apr 14 01:10:16.879554 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Apr 14 01:10:16.879613 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Apr 14 01:10:16.879695 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Apr 14 01:10:16.879757 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Apr 14 01:10:16.879812 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Apr 14 01:10:16.879820 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 14 01:10:16.879825 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 14 01:10:16.879831 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 14 01:10:16.879837 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 14 01:10:16.879845 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Apr 14 01:10:16.879850 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Apr 14 01:10:16.879856 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Apr 14 01:10:16.879861 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Apr 14 01:10:16.879867 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Apr 14 01:10:16.879872 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Apr 14 01:10:16.879877 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Apr 14 01:10:16.879883 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Apr 14 01:10:16.879888 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Apr 14 01:10:16.879896 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Apr 14 01:10:16.879901 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Apr 14 01:10:16.879907 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Apr 14 01:10:16.879912 kernel: iommu: Default domain type: Translated
Apr 14 01:10:16.879918 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 14 01:10:16.879923 kernel: PCI: Using ACPI for IRQ routing
Apr 14 01:10:16.879929 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 14 01:10:16.879934 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Apr 14 01:10:16.879940 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Apr 14 01:10:16.880005 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Apr 14 01:10:16.880073 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Apr 14 01:10:16.880149 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 14 01:10:16.880169 kernel: vgaarb: loaded
Apr 14 01:10:16.880176 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Apr 14 01:10:16.880182 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Apr 14 01:10:16.880199 kernel: clocksource: Switched to clocksource kvm-clock
Apr 14 01:10:16.880215 kernel: VFS: Disk quotas dquot_6.6.0
Apr 14 01:10:16.880231 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 14 01:10:16.880260 kernel: pnp: PnP ACPI init
Apr 14 01:10:16.880431 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Apr 14 01:10:16.880441 kernel: pnp: PnP ACPI: found 6 devices
Apr 14 01:10:16.880446 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 14 01:10:16.880452 kernel: NET: Registered PF_INET protocol family
Apr 14 01:10:16.880458 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 14 01:10:16.880463 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Apr 14 01:10:16.880469 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 14 01:10:16.880477 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 14 01:10:16.880483 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Apr 14 01:10:16.880488 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Apr 14 01:10:16.880493 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 01:10:16.880499 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Apr 14 01:10:16.880504 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 14 01:10:16.880510 kernel: NET: Registered PF_XDP protocol family
Apr 14 01:10:16.880564 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 14 01:10:16.880614 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 14 01:10:16.880686 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 14 01:10:16.880738 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Apr 14 01:10:16.880787 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Apr 14 01:10:16.880837 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Apr 14 01:10:16.880844 kernel: PCI: CLS 0 bytes, default 64
Apr 14 01:10:16.880850 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 14 01:10:16.880856 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x284409db922, max_idle_ns: 440795228871 ns
Apr 14 01:10:16.880861 kernel: Initialise system trusted keyrings
Apr 14 01:10:16.880869 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Apr 14 01:10:16.880875 kernel: Key type asymmetric registered
Apr 14 01:10:16.880880 kernel: Asymmetric key parser 'x509' registered
Apr 14 01:10:16.880886 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 14 01:10:16.880892 kernel: io scheduler mq-deadline registered
Apr 14 01:10:16.880897 kernel: io scheduler kyber registered
Apr 14 01:10:16.880903 kernel: io scheduler bfq registered
Apr 14 01:10:16.880908 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 14 01:10:16.880914 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Apr 14 01:10:16.880922 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Apr 14 01:10:16.880928 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Apr 14 01:10:16.880933 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 14 01:10:16.880939 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 14 01:10:16.880944 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 14 01:10:16.880950 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 14 01:10:16.880955 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 14 01:10:16.881017 kernel: rtc_cmos 00:04: RTC can wake from S4
Apr 14 01:10:16.881032 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 14 01:10:16.881100 kernel: rtc_cmos 00:04: registered as rtc0
Apr 14 01:10:16.881151 kernel: rtc_cmos 00:04: setting system clock to 2026-04-14T01:10:16 UTC (1776129016)
Apr 14 01:10:16.881202 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Apr 14 01:10:16.881209 kernel: intel_pstate: CPU model not supported
Apr 14 01:10:16.881215 kernel: NET: Registered PF_INET6 protocol family
Apr 14 01:10:16.881220 kernel: Segment Routing with IPv6
Apr 14 01:10:16.881226 kernel: In-situ OAM (IOAM) with IPv6
Apr 14 01:10:16.881231 kernel: NET: Registered PF_PACKET protocol family
Apr 14 01:10:16.881256 kernel: Key type dns_resolver registered
Apr 14 01:10:16.881262 kernel: IPI shorthand broadcast: enabled
Apr 14 01:10:16.881268 kernel: sched_clock: Marking stable (841131343, 198910307)->(1106787794, -66746144)
Apr 14 01:10:16.881273 kernel: registered taskstats version 1
Apr 14 01:10:16.881280 kernel: Loading compiled-in X.509 certificates
Apr 14 01:10:16.881285 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 14 01:10:16.881291 kernel: Key type .fscrypt registered
Apr 14 01:10:16.881296 kernel: Key type fscrypt-provisioning registered
Apr 14 01:10:16.881302 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 14 01:10:16.881309 kernel: ima: Allocated hash algorithm: sha1
Apr 14 01:10:16.881314 kernel: ima: No architecture policies found
Apr 14 01:10:16.881344 kernel: clk: Disabling unused clocks
Apr 14 01:10:16.881349 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 14 01:10:16.881355 kernel: Write protecting the kernel read-only data: 36864k
Apr 14 01:10:16.881361 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 14 01:10:16.881366 kernel: Run /init as init process
Apr 14 01:10:16.881372 kernel: with arguments:
Apr 14 01:10:16.881377 kernel: /init
Apr 14 01:10:16.881385 kernel: with environment:
Apr 14 01:10:16.881390 kernel: HOME=/
Apr 14 01:10:16.881396 kernel: TERM=linux
Apr 14 01:10:16.881403 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 01:10:16.881425 systemd[1]: Detected virtualization kvm.
Apr 14 01:10:16.881432 systemd[1]: Detected architecture x86-64.
Apr 14 01:10:16.881437 systemd[1]: Running in initrd.
Apr 14 01:10:16.881443 systemd[1]: No hostname configured, using default hostname.
Apr 14 01:10:16.881451 systemd[1]: Hostname set to .
Apr 14 01:10:16.881457 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 01:10:16.881463 systemd[1]: Queued start job for default target initrd.target.
Apr 14 01:10:16.881478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 01:10:16.881484 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 01:10:16.881500 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 14 01:10:16.881506 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 01:10:16.881512 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 14 01:10:16.881520 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 14 01:10:16.881537 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 14 01:10:16.881543 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 14 01:10:16.881549 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 01:10:16.881572 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 01:10:16.881587 systemd[1]: Reached target paths.target - Path Units.
Apr 14 01:10:16.881593 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 01:10:16.881599 systemd[1]: Reached target swap.target - Swaps.
Apr 14 01:10:16.881605 systemd[1]: Reached target timers.target - Timer Units.
Apr 14 01:10:16.881611 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 01:10:16.881618 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 01:10:16.881624 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 01:10:16.881630 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 01:10:16.881638 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 01:10:16.881643 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 01:10:16.881650 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 01:10:16.881656 systemd[1]: Reached target sockets.target - Socket Units.
Apr 14 01:10:16.881668 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 14 01:10:16.881678 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 01:10:16.881688 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 14 01:10:16.881699 systemd[1]: Starting systemd-fsck-usr.service...
Apr 14 01:10:16.881709 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 01:10:16.881717 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 01:10:16.881723 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 01:10:16.881729 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 14 01:10:16.881751 systemd-journald[194]: Collecting audit messages is disabled.
Apr 14 01:10:16.881768 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 01:10:16.881775 systemd[1]: Finished systemd-fsck-usr.service.
Apr 14 01:10:16.881785 systemd-journald[194]: Journal started
Apr 14 01:10:16.881800 systemd-journald[194]: Runtime Journal (/run/log/journal/43d8bc9104cd4b9ca42a13d3d48a6513) is 6.0M, max 48.4M, 42.3M free.
Apr 14 01:10:16.886176 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 01:10:16.886715 systemd-modules-load[195]: Inserted module 'overlay'
Apr 14 01:10:16.892551 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 01:10:16.991366 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 14 01:10:16.991396 kernel: Bridge firewalling registered
Apr 14 01:10:16.912490 systemd-modules-load[195]: Inserted module 'br_netfilter'
Apr 14 01:10:17.011364 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 14 01:10:17.011752 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 01:10:17.017922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 01:10:17.021391 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 01:10:17.028909 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 01:10:17.029973 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 01:10:17.034045 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 01:10:17.046273 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 01:10:17.049069 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 01:10:17.050108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 01:10:17.057601 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 14 01:10:17.063239 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 01:10:17.068985 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 14 01:10:17.084913 systemd-resolved[228]: Positive Trust Anchors:
Apr 14 01:10:17.084991 systemd-resolved[228]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 14 01:10:17.089222 dracut-cmdline[233]: dracut-dracut-053
Apr 14 01:10:17.085017 systemd-resolved[228]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 14 01:10:17.087111 systemd-resolved[228]: Defaulting to hostname 'linux'.
Apr 14 01:10:17.105802 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 14 01:10:17.088100 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 14 01:10:17.091090 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 14 01:10:17.172573 kernel: SCSI subsystem initialized
Apr 14 01:10:17.180531 kernel: Loading iSCSI transport class v2.0-870.
Apr 14 01:10:17.192562 kernel: iscsi: registered transport (tcp)
Apr 14 01:10:17.212012 kernel: iscsi: registered transport (qla4xxx)
Apr 14 01:10:17.212135 kernel: QLogic iSCSI HBA Driver
Apr 14 01:10:17.249892 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 14 01:10:17.257531 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 14 01:10:17.287008 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 14 01:10:17.287526 kernel: device-mapper: uevent: version 1.0.3
Apr 14 01:10:17.290376 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 14 01:10:17.363613 kernel: raid6: avx512x4 gen() 40986 MB/s
Apr 14 01:10:17.380720 kernel: raid6: avx512x2 gen() 42401 MB/s
Apr 14 01:10:17.397573 kernel: raid6: avx512x1 gen() 42665 MB/s
Apr 14 01:10:17.414386 kernel: raid6: avx2x4 gen() 37486 MB/s
Apr 14 01:10:17.431645 kernel: raid6: avx2x2 gen() 36484 MB/s
Apr 14 01:10:17.449266 kernel: raid6: avx2x1 gen() 28431 MB/s
Apr 14 01:10:17.449304 kernel: raid6: using algorithm avx512x1 gen() 42665 MB/s
Apr 14 01:10:17.467305 kernel: raid6: .... xor() 28658 MB/s, rmw enabled
Apr 14 01:10:17.467651 kernel: raid6: using avx512x2 recovery algorithm
Apr 14 01:10:17.488583 kernel: xor: automatically using best checksumming function avx
Apr 14 01:10:17.624632 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 14 01:10:17.635937 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 01:10:17.646624 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 01:10:17.656170 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Apr 14 01:10:17.659152 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 01:10:17.675593 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 14 01:10:17.690368 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Apr 14 01:10:17.717780 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 01:10:17.734755 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 01:10:17.766258 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 01:10:17.779564 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 14 01:10:17.805438 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Apr 14 01:10:17.805600 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Apr 14 01:10:17.805670 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 14 01:10:17.805679 kernel: GPT:9289727 != 19775487
Apr 14 01:10:17.805686 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 14 01:10:17.805692 kernel: GPT:9289727 != 19775487
Apr 14 01:10:17.805704 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 14 01:10:17.805712 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 01:10:17.789291 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 14 01:10:17.793025 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 01:10:17.797564 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 01:10:17.816941 kernel: cryptd: max_cpu_qlen set to 1000
Apr 14 01:10:17.815044 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 01:10:17.828387 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 14 01:10:17.837766 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (460)
Apr 14 01:10:17.836597 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 01:10:17.844698 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Apr 14 01:10:17.851173 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/vda3 scanned by (udev-worker) (471)
Apr 14 01:10:17.854504 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Apr 14 01:10:17.862356 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 14 01:10:17.862385 kernel: libata version 3.00 loaded.
Apr 14 01:10:17.864365 kernel: AES CTR mode by8 optimization enabled
Apr 14 01:10:17.869973 kernel: ahci 0000:00:1f.2: version 3.0
Apr 14 01:10:17.870255 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Apr 14 01:10:17.873529 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Apr 14 01:10:17.873692 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Apr 14 01:10:17.873872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 01:10:17.880871 kernel: scsi host0: ahci
Apr 14 01:10:17.880251 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Apr 14 01:10:17.894115 kernel: scsi host1: ahci
Apr 14 01:10:17.894283 kernel: scsi host2: ahci
Apr 14 01:10:17.894477 kernel: scsi host3: ahci
Apr 14 01:10:17.894586 kernel: scsi host4: ahci
Apr 14 01:10:17.894688 kernel: scsi host5: ahci
Apr 14 01:10:17.894792 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Apr 14 01:10:17.894812 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Apr 14 01:10:17.894825 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Apr 14 01:10:17.894837 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Apr 14 01:10:17.894849 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Apr 14 01:10:17.894861 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Apr 14 01:10:17.899096 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Apr 14 01:10:17.917912 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 14 01:10:17.918073 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 01:10:17.918136 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 01:10:17.930394 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 01:10:17.936354 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 01:10:17.948758 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 01:10:17.936432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 01:10:17.956122 disk-uuid[543]: Primary Header is updated.
Apr 14 01:10:17.956122 disk-uuid[543]: Secondary Entries is updated.
Apr 14 01:10:17.956122 disk-uuid[543]: Secondary Header is updated.
Apr 14 01:10:17.944292 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 01:10:17.960557 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 01:10:18.085650 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 01:10:18.101789 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 14 01:10:18.111970 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 01:10:18.211539 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Apr 14 01:10:18.211691 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Apr 14 01:10:18.211706 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Apr 14 01:10:18.214517 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Apr 14 01:10:18.214616 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Apr 14 01:10:18.216469 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Apr 14 01:10:18.217972 kernel: ata3.00: applying bridge limits
Apr 14 01:10:18.218556 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Apr 14 01:10:18.221432 kernel: ata3.00: configured for UDMA/100
Apr 14 01:10:18.221586 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Apr 14 01:10:18.264473 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Apr 14 01:10:18.264739 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Apr 14 01:10:18.282364 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Apr 14 01:10:18.954277 disk-uuid[550]: The operation has completed successfully.
Apr 14 01:10:18.956834 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Apr 14 01:10:18.983979 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 14 01:10:18.984107 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 14 01:10:19.008006 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 14 01:10:19.013379 sh[592]: Success
Apr 14 01:10:19.027363 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Apr 14 01:10:19.061181 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 14 01:10:19.078889 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 14 01:10:19.080869 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 14 01:10:19.094827 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 14 01:10:19.094862 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 14 01:10:19.094872 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 14 01:10:19.097777 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 14 01:10:19.097802 kernel: BTRFS info (device dm-0): using free space tree
Apr 14 01:10:19.105173 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 14 01:10:19.109631 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 14 01:10:19.118669 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 14 01:10:19.121733 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 14 01:10:19.132119 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 01:10:19.132153 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 01:10:19.132163 kernel: BTRFS info (device vda6): using free space tree
Apr 14 01:10:19.137468 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 01:10:19.145882 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 14 01:10:19.149292 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 01:10:19.154348 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 14 01:10:19.160864 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 14 01:10:19.219216 ignition[684]: Ignition 2.19.0
Apr 14 01:10:19.220021 ignition[684]: Stage: fetch-offline
Apr 14 01:10:19.220051 ignition[684]: no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:19.220058 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:19.220138 ignition[684]: parsed url from cmdline: ""
Apr 14 01:10:19.220140 ignition[684]: no config URL provided
Apr 14 01:10:19.220144 ignition[684]: reading system config file "/usr/lib/ignition/user.ign"
Apr 14 01:10:19.220149 ignition[684]: no config at "/usr/lib/ignition/user.ign"
Apr 14 01:10:19.220168 ignition[684]: op(1): [started] loading QEMU firmware config module
Apr 14 01:10:19.220172 ignition[684]: op(1): executing: "modprobe" "qemu_fw_cfg"
Apr 14 01:10:19.236118 ignition[684]: op(1): [finished] loading QEMU firmware config module
Apr 14 01:10:19.244883 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 01:10:19.254803 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 01:10:19.279614 systemd-networkd[780]: lo: Link UP
Apr 14 01:10:19.279639 systemd-networkd[780]: lo: Gained carrier
Apr 14 01:10:19.280645 systemd-networkd[780]: Enumeration completed
Apr 14 01:10:19.280913 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 01:10:19.281225 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 01:10:19.281227 systemd-networkd[780]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 01:10:19.281910 systemd-networkd[780]: eth0: Link UP
Apr 14 01:10:19.281912 systemd-networkd[780]: eth0: Gained carrier
Apr 14 01:10:19.281918 systemd-networkd[780]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 01:10:19.283242 systemd[1]: Reached target network.target - Network.
Apr 14 01:10:19.297453 systemd-networkd[780]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 01:10:19.378754 ignition[684]: parsing config with SHA512: edee9b791aa988f639c946714b36e6e69d6b02c76a3c203784bb2c6d79289c949a9539e78b94276d445005bd662aabcbd0357cfae61d5ee724742262e62a2d67
Apr 14 01:10:19.386292 unknown[684]: fetched base config from "system"
Apr 14 01:10:19.386304 unknown[684]: fetched user config from "qemu"
Apr 14 01:10:19.391549 ignition[684]: fetch-offline: fetch-offline passed
Apr 14 01:10:19.391660 ignition[684]: Ignition finished successfully
Apr 14 01:10:19.398155 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 01:10:19.398459 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Apr 14 01:10:19.412807 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 14 01:10:19.429944 ignition[785]: Ignition 2.19.0
Apr 14 01:10:19.429964 ignition[785]: Stage: kargs
Apr 14 01:10:19.430110 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:19.430117 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:19.432410 ignition[785]: kargs: kargs passed
Apr 14 01:10:19.432533 ignition[785]: Ignition finished successfully
Apr 14 01:10:19.441662 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 14 01:10:19.453576 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 14 01:10:19.470151 ignition[793]: Ignition 2.19.0
Apr 14 01:10:19.470171 ignition[793]: Stage: disks
Apr 14 01:10:19.470303 ignition[793]: no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:19.470310 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:19.477008 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 14 01:10:19.470988 ignition[793]: disks: disks passed
Apr 14 01:10:19.481691 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 14 01:10:19.471018 ignition[793]: Ignition finished successfully
Apr 14 01:10:19.484832 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 01:10:19.487254 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 01:10:19.488501 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 14 01:10:19.488652 systemd[1]: Reached target basic.target - Basic System.
Apr 14 01:10:19.514019 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 14 01:10:19.525928 systemd-fsck[803]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 14 01:10:19.531467 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 14 01:10:19.537169 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 14 01:10:19.634666 kernel: EXT4-fs (vda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 14 01:10:19.635183 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 14 01:10:19.638989 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 14 01:10:19.655821 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 01:10:19.660189 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 14 01:10:19.661256 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 14 01:10:19.661300 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 14 01:10:19.661353 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 01:10:19.681808 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (811)
Apr 14 01:10:19.681828 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 01:10:19.681836 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 01:10:19.681843 kernel: BTRFS info (device vda6): using free space tree
Apr 14 01:10:19.682378 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 14 01:10:19.683666 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 14 01:10:19.690632 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 01:10:19.691764 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 01:10:19.735792 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Apr 14 01:10:19.739036 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Apr 14 01:10:19.742805 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Apr 14 01:10:19.748019 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 14 01:10:19.829542 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 14 01:10:19.845538 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 14 01:10:19.849065 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 14 01:10:19.855405 kernel: BTRFS info (device vda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 01:10:19.876281 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 14 01:10:19.882810 ignition[925]: INFO : Ignition 2.19.0
Apr 14 01:10:19.882810 ignition[925]: INFO : Stage: mount
Apr 14 01:10:19.885985 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:19.885985 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:19.885985 ignition[925]: INFO : mount: mount passed
Apr 14 01:10:19.885985 ignition[925]: INFO : Ignition finished successfully
Apr 14 01:10:19.888785 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 14 01:10:19.904085 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 14 01:10:20.093267 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 14 01:10:20.112937 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 14 01:10:20.126525 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (938)
Apr 14 01:10:20.130084 kernel: BTRFS info (device vda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 14 01:10:20.130165 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Apr 14 01:10:20.130183 kernel: BTRFS info (device vda6): using free space tree
Apr 14 01:10:20.135485 kernel: BTRFS info (device vda6): auto enabling async discard
Apr 14 01:10:20.137930 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 14 01:10:20.171802 ignition[955]: INFO : Ignition 2.19.0
Apr 14 01:10:20.171802 ignition[955]: INFO : Stage: files
Apr 14 01:10:20.171802 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:20.171802 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:20.178593 ignition[955]: DEBUG : files: compiled without relabeling support, skipping
Apr 14 01:10:20.180623 ignition[955]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 14 01:10:20.180623 ignition[955]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 14 01:10:20.186367 ignition[955]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 14 01:10:20.188999 ignition[955]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 14 01:10:20.191309 unknown[955]: wrote ssh authorized keys file for user: core
Apr 14 01:10:20.193177 ignition[955]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 14 01:10:20.193177 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 14 01:10:20.193177 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 14 01:10:20.193177 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 01:10:20.193177 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 14 01:10:20.257877 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 14 01:10:20.356929 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 14 01:10:20.356929 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 01:10:20.363840 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 14 01:10:20.417839 systemd-networkd[780]: eth0: Gained IPv6LL
Apr 14 01:10:20.670107 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 14 01:10:20.837120 ignition[955]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 14 01:10:20.837120 ignition[955]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(10): [started] processing unit "coreos-metadata.service"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service"
Apr 14 01:10:20.842948 ignition[955]: INFO : files: op(12): [started] setting preset to disabled for "coreos-metadata.service"
Apr 14 01:10:20.884175 ignition[955]: INFO : files: op(12): op(13): [started] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 01:10:20.884175 ignition[955]: INFO : files: op(12): op(13): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Apr 14 01:10:20.884175 ignition[955]: INFO : files: op(12): [finished] setting preset to disabled for "coreos-metadata.service"
Apr 14 01:10:20.884175 ignition[955]: INFO : files: op(14): [started] setting preset to enabled for "prepare-helm.service"
Apr 14 01:10:20.884175 ignition[955]: INFO : files: op(14): [finished] setting preset to enabled for "prepare-helm.service"
Apr 14 01:10:20.884175 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 01:10:20.884175 ignition[955]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 14 01:10:20.884175 ignition[955]: INFO : files: files passed
Apr 14 01:10:20.884175 ignition[955]: INFO : Ignition finished successfully
Apr 14 01:10:20.864182 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 14 01:10:20.876598 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 14 01:10:20.880800 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 14 01:10:20.920794 initrd-setup-root-after-ignition[982]: grep: /sysroot/oem/oem-release: No such file or directory
Apr 14 01:10:20.884273 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 14 01:10:20.925250 initrd-setup-root-after-ignition[984]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 01:10:20.925250 initrd-setup-root-after-ignition[984]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 01:10:20.884387 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 14 01:10:20.934777 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 14 01:10:20.895651 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 01:10:20.899177 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 14 01:10:20.904674 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 14 01:10:20.931418 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 14 01:10:20.931541 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 14 01:10:20.934877 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 14 01:10:20.937691 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 14 01:10:20.937920 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 14 01:10:20.939063 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 14 01:10:20.953929 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 01:10:20.970841 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 14 01:10:20.980222 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 14 01:10:20.980439 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 01:10:20.984949 systemd[1]: Stopped target timers.target - Timer Units.
Apr 14 01:10:20.990862 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 14 01:10:20.991053 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 14 01:10:20.995494 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 14 01:10:20.998702 systemd[1]: Stopped target basic.target - Basic System.
Apr 14 01:10:21.002698 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 14 01:10:21.004224 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 14 01:10:21.009875 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 14 01:10:21.012378 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 14 01:10:21.015304 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 14 01:10:21.021731 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 14 01:10:21.025837 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 14 01:10:21.030926 systemd[1]: Stopped target swap.target - Swaps.
Apr 14 01:10:21.034988 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 14 01:10:21.035188 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 14 01:10:21.044508 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 14 01:10:21.049555 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 01:10:21.053238 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 14 01:10:21.053508 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 01:10:21.057566 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 14 01:10:21.057792 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 14 01:10:21.064585 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 14 01:10:21.064822 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 14 01:10:21.069017 systemd[1]: Stopped target paths.target - Path Units.
Apr 14 01:10:21.070715 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 14 01:10:21.075981 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 01:10:21.084091 systemd[1]: Stopped target slices.target - Slice Units.
Apr 14 01:10:21.085781 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 14 01:10:21.089166 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 14 01:10:21.089298 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 14 01:10:21.093701 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 14 01:10:21.093764 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 14 01:10:21.097032 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 14 01:10:21.097279 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 14 01:10:21.100375 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 14 01:10:21.100622 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 14 01:10:21.122257 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 14 01:10:21.123442 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 14 01:10:21.127690 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 14 01:10:21.127899 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 01:10:21.130896 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 14 01:10:21.131039 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 14 01:10:21.134604 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 14 01:10:21.134693 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 14 01:10:21.141123 ignition[1009]: INFO : Ignition 2.19.0
Apr 14 01:10:21.141123 ignition[1009]: INFO : Stage: umount
Apr 14 01:10:21.141123 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 14 01:10:21.141123 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Apr 14 01:10:21.142299 ignition[1009]: INFO : umount: umount passed
Apr 14 01:10:21.142299 ignition[1009]: INFO : Ignition finished successfully
Apr 14 01:10:21.144639 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 14 01:10:21.144755 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 14 01:10:21.145299 systemd[1]: Stopped target network.target - Network.
Apr 14 01:10:21.145742 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 14 01:10:21.145789 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 14 01:10:21.146096 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 14 01:10:21.146120 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 14 01:10:21.147095 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 14 01:10:21.147129 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 14 01:10:21.147240 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 14 01:10:21.147264 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 14 01:10:21.147951 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 14 01:10:21.148126 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 14 01:10:21.187136 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 14 01:10:21.187349 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 14 01:10:21.191837 systemd-networkd[780]: eth0: DHCPv6 lease lost
Apr 14 01:10:21.192716 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 14 01:10:21.192792 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 14 01:10:21.197049 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 14 01:10:21.197165 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 14 01:10:21.201729 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 14 01:10:21.201775 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 01:10:21.219582 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 14 01:10:21.224815 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 14 01:10:21.225022 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 14 01:10:21.230049 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 14 01:10:21.230101 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 14 01:10:21.235699 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 14 01:10:21.235800 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 14 01:10:21.240538 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 01:10:21.249235 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 14 01:10:21.268517 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 14 01:10:21.270806 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 01:10:21.271622 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 14 01:10:21.271712 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 14 01:10:21.278290 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 14 01:10:21.278640 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 14 01:10:21.284673 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 14 01:10:21.284724 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 14 01:10:21.288225 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 14 01:10:21.288263 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 01:10:21.292281 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 14 01:10:21.292417 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 14 01:10:21.300989 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 14 01:10:21.301102 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 14 01:10:21.308169 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 14 01:10:21.308291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 14 01:10:21.317769 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 14 01:10:21.317908 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 14 01:10:21.337876 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 14 01:10:21.341831 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 14 01:10:21.341944 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 01:10:21.346070 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 14 01:10:21.346221 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 01:10:21.351777 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 14 01:10:21.352096 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 01:10:21.355174 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 14 01:10:21.355618 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 01:10:21.364734 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 14 01:10:21.364962 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 14 01:10:21.367960 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 14 01:10:21.382842 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 14 01:10:21.389858 systemd[1]: Switching root.
Apr 14 01:10:21.422790 systemd-journald[194]: Journal stopped
Apr 14 01:10:22.283157 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Apr 14 01:10:22.283208 kernel: SELinux: policy capability network_peer_controls=1
Apr 14 01:10:22.283222 kernel: SELinux: policy capability open_perms=1
Apr 14 01:10:22.283230 kernel: SELinux: policy capability extended_socket_class=1
Apr 14 01:10:22.283238 kernel: SELinux: policy capability always_check_network=0
Apr 14 01:10:22.283245 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 14 01:10:22.283253 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 14 01:10:22.283263 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 14 01:10:22.283270 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 14 01:10:22.283278 kernel: audit: type=1403 audit(1776129021.595:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 14 01:10:22.283286 systemd[1]: Successfully loaded SELinux policy in 33.795ms.
Apr 14 01:10:22.283302 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.088ms.
Apr 14 01:10:22.283383 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 14 01:10:22.283395 systemd[1]: Detected virtualization kvm.
Apr 14 01:10:22.283403 systemd[1]: Detected architecture x86-64.
Apr 14 01:10:22.283410 systemd[1]: Detected first boot.
Apr 14 01:10:22.283421 systemd[1]: Initializing machine ID from VM UUID.
Apr 14 01:10:22.283450 zram_generator::config[1070]: No configuration found.
Apr 14 01:10:22.283462 systemd[1]: Populated /etc with preset unit settings.
Apr 14 01:10:22.283470 systemd[1]: Queued start job for default target multi-user.target.
Apr 14 01:10:22.283478 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Apr 14 01:10:22.283486 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 14 01:10:22.283494 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 14 01:10:22.283502 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 14 01:10:22.283518 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 14 01:10:22.283531 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 14 01:10:22.283542 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 14 01:10:22.283550 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 14 01:10:22.283559 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 14 01:10:22.283566 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 14 01:10:22.283574 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 14 01:10:22.283582 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 14 01:10:22.283630 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 14 01:10:22.283642 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 14 01:10:22.283650 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 14 01:10:22.283657 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 14 01:10:22.283665 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 14 01:10:22.283673 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 14 01:10:22.283681 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 14 01:10:22.283694 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 14 01:10:22.283702 systemd[1]: Reached target slices.target - Slice Units.
Apr 14 01:10:22.283712 systemd[1]: Reached target swap.target - Swaps.
Apr 14 01:10:22.283720 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 14 01:10:22.283727 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 14 01:10:22.283735 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 14 01:10:22.283743 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 14 01:10:22.283750 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 14 01:10:22.283758 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 14 01:10:22.283766 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 14 01:10:22.283774 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 14 01:10:22.283783 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 14 01:10:22.283790 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 14 01:10:22.283798 systemd[1]: Mounting media.mount - External Media Directory...
Apr 14 01:10:22.283805 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 01:10:22.283814 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 14 01:10:22.283821 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 14 01:10:22.283829 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 14 01:10:22.283840 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 14 01:10:22.283848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 14 01:10:22.283857 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 14 01:10:22.283865 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 14 01:10:22.283873 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 14 01:10:22.283881 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 14 01:10:22.283888 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 14 01:10:22.283896 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 14 01:10:22.283903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 14 01:10:22.283912 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 14 01:10:22.283921 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 14 01:10:22.283930 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 14 01:10:22.283937 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 14 01:10:22.283945 kernel: fuse: init (API version 7.39)
Apr 14 01:10:22.283952 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 14 01:10:22.283960 kernel: loop: module loaded
Apr 14 01:10:22.283967 kernel: ACPI: bus type drm_connector registered
Apr 14 01:10:22.283992 systemd-journald[1162]: Collecting audit messages is disabled.
Apr 14 01:10:22.284010 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 14 01:10:22.284018 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 14 01:10:22.284026 systemd-journald[1162]: Journal started
Apr 14 01:10:22.284043 systemd-journald[1162]: Runtime Journal (/run/log/journal/43d8bc9104cd4b9ca42a13d3d48a6513) is 6.0M, max 48.4M, 42.3M free.
Apr 14 01:10:22.290359 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 14 01:10:22.296378 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 14 01:10:22.299379 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 14 01:10:22.301947 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 14 01:10:22.304768 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 14 01:10:22.307471 systemd[1]: Mounted media.mount - External Media Directory.
Apr 14 01:10:22.309892 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 14 01:10:22.313045 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 14 01:10:22.316612 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 14 01:10:22.319312 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 14 01:10:22.322980 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 14 01:10:22.326662 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 14 01:10:22.326811 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 14 01:10:22.329845 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 14 01:10:22.329983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 14 01:10:22.332931 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 14 01:10:22.333130 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 14 01:10:22.336095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 14 01:10:22.336295 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 14 01:10:22.339706 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 14 01:10:22.339906 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 14 01:10:22.343483 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 14 01:10:22.343695 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 14 01:10:22.347145 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 14 01:10:22.350461 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 14 01:10:22.353605 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 14 01:10:22.356655 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 14 01:10:22.369633 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 14 01:10:22.379646 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 14 01:10:22.383785 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 14 01:10:22.386594 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 14 01:10:22.388412 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 14 01:10:22.392766 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 14 01:10:22.396009 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 14 01:10:22.397208 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 14 01:10:22.399637 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 14 01:10:22.399864 systemd-journald[1162]: Time spent on flushing to /var/log/journal/43d8bc9104cd4b9ca42a13d3d48a6513 is 17.017ms for 939 entries.
Apr 14 01:10:22.399864 systemd-journald[1162]: System Journal (/var/log/journal/43d8bc9104cd4b9ca42a13d3d48a6513) is 8.0M, max 195.6M, 187.6M free.
Apr 14 01:10:22.422618 systemd-journald[1162]: Received client request to flush runtime journal.
Apr 14 01:10:22.400930 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 14 01:10:22.408178 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 14 01:10:22.412655 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 14 01:10:22.417825 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 14 01:10:22.420509 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 14 01:10:22.423640 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 14 01:10:22.426854 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 14 01:10:22.431165 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 14 01:10:22.438041 udevadm[1210]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 14 01:10:22.440448 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Apr 14 01:10:22.440460 systemd-tmpfiles[1209]: ACLs are not supported, ignoring.
Apr 14 01:10:22.441888 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 14 01:10:22.444052 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 14 01:10:22.458718 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 14 01:10:22.483065 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 14 01:10:22.490587 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 14 01:10:22.506817 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Apr 14 01:10:22.506865 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Apr 14 01:10:22.510915 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 14 01:10:22.809505 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 14 01:10:22.822185 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 14 01:10:22.842075 systemd-udevd[1236]: Using default interface naming scheme 'v255'.
Apr 14 01:10:22.864072 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 14 01:10:22.873845 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 14 01:10:22.892517 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 14 01:10:22.914488 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1242)
Apr 14 01:10:22.916742 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Apr 14 01:10:22.943284 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 14 01:10:22.974150 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Apr 14 01:10:22.983377 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 14 01:10:22.993409 kernel: ACPI: button: Power Button [PWRF]
Apr 14 01:10:22.996716 systemd-networkd[1244]: lo: Link UP
Apr 14 01:10:22.998381 systemd-networkd[1244]: lo: Gained carrier
Apr 14 01:10:23.000049 systemd-networkd[1244]: Enumeration completed
Apr 14 01:10:23.004481 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Apr 14 01:10:23.004828 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Apr 14 01:10:23.004973 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Apr 14 01:10:23.002471 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 14 01:10:23.003025 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 01:10:23.003028 systemd-networkd[1244]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 14 01:10:23.003743 systemd-networkd[1244]: eth0: Link UP
Apr 14 01:10:23.003745 systemd-networkd[1244]: eth0: Gained carrier
Apr 14 01:10:23.003756 systemd-networkd[1244]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 14 01:10:23.014360 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Apr 14 01:10:23.017522 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 14 01:10:23.021458 systemd-networkd[1244]: eth0: DHCPv4 address 10.0.0.8/16, gateway 10.0.0.1 acquired from 10.0.0.1
Apr 14 01:10:23.054901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 14 01:10:23.114402 kernel: mousedev: PS/2 mouse device common for all mice
Apr 14 01:10:23.209425 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 14 01:10:23.277993 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 14 01:10:23.290572 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 14 01:10:23.299228 lvm[1282]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 01:10:23.340203 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 14 01:10:23.342740 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 14 01:10:23.357844 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 14 01:10:23.365280 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 14 01:10:23.405989 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 14 01:10:23.408570 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 14 01:10:23.411495 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 14 01:10:23.411529 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 14 01:10:23.413361 systemd[1]: Reached target machines.target - Containers.
Apr 14 01:10:23.416591 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 14 01:10:23.429720 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 14 01:10:23.433779 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 14 01:10:23.435746 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 14 01:10:23.437250 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 14 01:10:23.440618 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 14 01:10:23.446620 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 14 01:10:23.449720 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 14 01:10:23.452834 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 14 01:10:23.457346 kernel: loop0: detected capacity change from 0 to 228704
Apr 14 01:10:23.464753 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 14 01:10:23.465248 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 14 01:10:23.478378 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 14 01:10:23.513473 kernel: loop1: detected capacity change from 0 to 142488
Apr 14 01:10:23.542557 kernel: loop2: detected capacity change from 0 to 140768
Apr 14 01:10:23.575410 kernel: loop3: detected capacity change from 0 to 228704
Apr 14 01:10:23.585374 kernel: loop4: detected capacity change from 0 to 142488
Apr 14 01:10:23.595345 kernel: loop5: detected capacity change from 0 to 140768
Apr 14 01:10:23.604546 (sd-merge)[1305]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Apr 14 01:10:23.604976 (sd-merge)[1305]: Merged extensions into '/usr'. Apr 14 01:10:23.607709 systemd[1]: Reloading requested from client PID 1293 ('systemd-sysext') (unit systemd-sysext.service)... Apr 14 01:10:23.607736 systemd[1]: Reloading... Apr 14 01:10:23.647593 zram_generator::config[1334]: No configuration found. Apr 14 01:10:23.691464 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 14 01:10:23.741729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 01:10:23.784944 systemd[1]: Reloading finished in 176 ms. Apr 14 01:10:23.803393 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 14 01:10:23.806012 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 14 01:10:23.822870 systemd[1]: Starting ensure-sysext.service... Apr 14 01:10:23.825624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 14 01:10:23.832754 systemd[1]: Reloading requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)... Apr 14 01:10:23.832779 systemd[1]: Reloading... Apr 14 01:10:23.846029 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 14 01:10:23.846261 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 14 01:10:23.846837 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 14 01:10:23.847035 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Apr 14 01:10:23.847088 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. 
Apr 14 01:10:23.849040 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 01:10:23.849060 systemd-tmpfiles[1378]: Skipping /boot Apr 14 01:10:23.855816 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Apr 14 01:10:23.855848 systemd-tmpfiles[1378]: Skipping /boot Apr 14 01:10:23.879363 zram_generator::config[1405]: No configuration found. Apr 14 01:10:23.975206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 14 01:10:24.014697 systemd[1]: Reloading finished in 181 ms. Apr 14 01:10:24.031304 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 14 01:10:24.056254 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:10:24.057774 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 14 01:10:24.061254 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 14 01:10:24.063507 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 01:10:24.064570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 01:10:24.069561 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 01:10:24.074756 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 01:10:24.076619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 01:10:24.078738 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Apr 14 01:10:24.083688 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 14 01:10:24.088577 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 14 01:10:24.090866 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:10:24.094731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 01:10:24.095004 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 01:10:24.102708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 01:10:24.102897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 01:10:24.105535 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 01:10:24.105668 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 01:10:24.108476 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 14 01:10:24.110973 augenrules[1479]: No rules Apr 14 01:10:24.112042 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 14 01:10:24.123572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:10:24.123762 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 01:10:24.124870 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 01:10:24.129553 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 01:10:24.133609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 01:10:24.135686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 14 01:10:24.136981 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 14 01:10:24.138748 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:10:24.140257 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 14 01:10:24.144834 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 01:10:24.144974 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 01:10:24.147428 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 14 01:10:24.147569 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 01:10:24.150399 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 01:10:24.150697 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 01:10:24.153024 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 14 01:10:24.156939 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 14 01:10:24.167819 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:10:24.168052 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 14 01:10:24.172676 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 14 01:10:24.179570 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 14 01:10:24.182520 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 14 01:10:24.183511 systemd-resolved[1468]: Positive Trust Anchors: Apr 14 01:10:24.183519 systemd-resolved[1468]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 14 01:10:24.183545 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 14 01:10:24.185506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 14 01:10:24.187264 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 14 01:10:24.187306 systemd-resolved[1468]: Defaulting to hostname 'linux'. Apr 14 01:10:24.187433 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 14 01:10:24.187546 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 14 01:10:24.188502 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 14 01:10:24.188635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 14 01:10:24.190804 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 14 01:10:24.192979 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 14 01:10:24.193114 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 14 01:10:24.195198 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Apr 14 01:10:24.195362 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 14 01:10:24.198928 systemd[1]: Finished ensure-sysext.service. Apr 14 01:10:24.200685 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 14 01:10:24.200852 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 14 01:10:24.206157 systemd[1]: Reached target network.target - Network. Apr 14 01:10:24.207701 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 14 01:10:24.209671 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 14 01:10:24.209741 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 14 01:10:24.221558 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 14 01:10:24.256554 systemd-networkd[1244]: eth0: Gained IPv6LL Apr 14 01:10:24.259119 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 14 01:10:24.261494 systemd[1]: Reached target network-online.target - Network is Online. Apr 14 01:10:24.263785 systemd-timesyncd[1525]: Contacted time server 10.0.0.1:123 (10.0.0.1). Apr 14 01:10:24.263841 systemd-timesyncd[1525]: Initial clock synchronization to Tue 2026-04-14 01:10:24.662863 UTC. Apr 14 01:10:24.263924 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 14 01:10:24.266260 systemd[1]: Reached target sysinit.target - System Initialization. Apr 14 01:10:24.268500 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 14 01:10:24.270660 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Apr 14 01:10:24.273470 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 14 01:10:24.275786 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 14 01:10:24.275831 systemd[1]: Reached target paths.target - Path Units. Apr 14 01:10:24.277468 systemd[1]: Reached target time-set.target - System Time Set. Apr 14 01:10:24.279516 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 14 01:10:24.281407 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 14 01:10:24.283518 systemd[1]: Reached target timers.target - Timer Units. Apr 14 01:10:24.285889 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 14 01:10:24.289498 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 14 01:10:24.292491 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 14 01:10:24.298135 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 14 01:10:24.300630 systemd[1]: Reached target sockets.target - Socket Units. Apr 14 01:10:24.302265 systemd[1]: Reached target basic.target - Basic System. Apr 14 01:10:24.303870 systemd[1]: System is tainted: cgroupsv1 Apr 14 01:10:24.303915 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 14 01:10:24.303929 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 14 01:10:24.304870 systemd[1]: Starting containerd.service - containerd container runtime... Apr 14 01:10:24.307397 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Apr 14 01:10:24.309911 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 14 01:10:24.313480 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Apr 14 01:10:24.317678 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 14 01:10:24.319435 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 14 01:10:24.321159 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 14 01:10:24.323289 jq[1535]: false Apr 14 01:10:24.326421 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 14 01:10:24.333550 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 14 01:10:24.335880 extend-filesystems[1538]: Found loop3 Apr 14 01:10:24.337589 extend-filesystems[1538]: Found loop4 Apr 14 01:10:24.337589 extend-filesystems[1538]: Found loop5 Apr 14 01:10:24.337589 extend-filesystems[1538]: Found sr0 Apr 14 01:10:24.337589 extend-filesystems[1538]: Found vda Apr 14 01:10:24.337589 extend-filesystems[1538]: Found vda1 Apr 14 01:10:24.337589 extend-filesystems[1538]: Found vda2 Apr 14 01:10:24.337589 extend-filesystems[1538]: Found vda3 Apr 14 01:10:24.337589 extend-filesystems[1538]: Found usr Apr 14 01:10:24.337589 extend-filesystems[1538]: Found vda4 Apr 14 01:10:24.337589 extend-filesystems[1538]: Found vda6 Apr 14 01:10:24.337589 extend-filesystems[1538]: Found vda7 Apr 14 01:10:24.337589 extend-filesystems[1538]: Found vda9 Apr 14 01:10:24.337589 extend-filesystems[1538]: Checking size of /dev/vda9 Apr 14 01:10:24.370721 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (1252) Apr 14 01:10:24.370744 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Apr 14 01:10:24.336926 dbus-daemon[1534]: [system] SELinux support is enabled Apr 14 01:10:24.396846 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Apr 14 01:10:24.396884 extend-filesystems[1538]: Resized partition /dev/vda9 Apr 14 01:10:24.356255 systemd[1]: Starting 
prepare-helm.service - Unpack helm to /opt/bin... Apr 14 01:10:24.411849 extend-filesystems[1561]: resize2fs 1.47.1 (20-May-2024) Apr 14 01:10:24.411849 extend-filesystems[1561]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 14 01:10:24.411849 extend-filesystems[1561]: old_desc_blocks = 1, new_desc_blocks = 1 Apr 14 01:10:24.411849 extend-filesystems[1561]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Apr 14 01:10:24.385532 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 14 01:10:24.431272 extend-filesystems[1538]: Resized filesystem in /dev/vda9 Apr 14 01:10:24.389072 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 14 01:10:24.411746 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 14 01:10:24.413788 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 14 01:10:24.414960 systemd[1]: Starting update-engine.service - Update Engine... Apr 14 01:10:24.421578 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 14 01:10:24.423948 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 14 01:10:24.433877 jq[1574]: true Apr 14 01:10:24.434848 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 14 01:10:24.435055 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 14 01:10:24.435242 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 14 01:10:24.435475 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Apr 14 01:10:24.438496 update_engine[1570]: I20260414 01:10:24.438215 1570 main.cc:92] Flatcar Update Engine starting Apr 14 01:10:24.439905 update_engine[1570]: I20260414 01:10:24.439877 1570 update_check_scheduler.cc:74] Next update check in 6m22s Apr 14 01:10:24.440651 systemd[1]: motdgen.service: Deactivated successfully. Apr 14 01:10:24.442610 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 14 01:10:24.444730 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 14 01:10:24.448568 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 14 01:10:24.448740 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 14 01:10:24.462605 jq[1583]: true Apr 14 01:10:24.463489 (ntainerd)[1584]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 14 01:10:24.470248 systemd[1]: coreos-metadata.service: Deactivated successfully. Apr 14 01:10:24.470572 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Apr 14 01:10:24.475275 systemd-logind[1568]: Watching system buttons on /dev/input/event1 (Power Button) Apr 14 01:10:24.475569 systemd-logind[1568]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 14 01:10:24.476076 systemd-logind[1568]: New seat seat0. Apr 14 01:10:24.481932 systemd[1]: Started systemd-logind.service - User Login Management. Apr 14 01:10:24.490961 tar[1582]: linux-amd64/LICENSE Apr 14 01:10:24.490872 systemd[1]: Started update-engine.service - Update Engine. Apr 14 01:10:24.491286 tar[1582]: linux-amd64/helm Apr 14 01:10:24.497153 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Apr 14 01:10:24.497515 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 14 01:10:24.497751 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 14 01:10:24.500682 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 14 01:10:24.500802 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 14 01:10:24.504272 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 14 01:10:24.511109 bash[1617]: Updated "/home/core/.ssh/authorized_keys" Apr 14 01:10:24.513717 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 14 01:10:24.521521 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 14 01:10:24.526109 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Apr 14 01:10:24.561123 locksmithd[1618]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 14 01:10:24.598406 sshd_keygen[1575]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 14 01:10:24.619717 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 14 01:10:24.630866 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 14 01:10:24.638472 systemd[1]: issuegen.service: Deactivated successfully. Apr 14 01:10:24.638663 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 14 01:10:24.648619 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Apr 14 01:10:24.658750 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 14 01:10:24.668076 containerd[1584]: time="2026-04-14T01:10:24.668021871Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 14 01:10:24.668202 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 14 01:10:24.676730 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 14 01:10:24.679538 systemd[1]: Reached target getty.target - Login Prompts. Apr 14 01:10:24.689274 containerd[1584]: time="2026-04-14T01:10:24.689061681Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691008 containerd[1584]: time="2026-04-14T01:10:24.690883544Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691008 containerd[1584]: time="2026-04-14T01:10:24.690986535Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 14 01:10:24.691008 containerd[1584]: time="2026-04-14T01:10:24.691003053Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691124247Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691136889Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691177660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691187105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691418034Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691429587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691458962Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691466806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691533236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691675133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 14 01:10:24.691961 containerd[1584]: time="2026-04-14T01:10:24.691776241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 14 01:10:24.692152 containerd[1584]: time="2026-04-14T01:10:24.691784975Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 14 01:10:24.692152 containerd[1584]: time="2026-04-14T01:10:24.691835718Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 14 01:10:24.692152 containerd[1584]: time="2026-04-14T01:10:24.691863296Z" level=info msg="metadata content store policy set" policy=shared Apr 14 01:10:24.697565 containerd[1584]: time="2026-04-14T01:10:24.697459137Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 14 01:10:24.697645 containerd[1584]: time="2026-04-14T01:10:24.697575285Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 14 01:10:24.697645 containerd[1584]: time="2026-04-14T01:10:24.697594974Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 14 01:10:24.697645 containerd[1584]: time="2026-04-14T01:10:24.697607183Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 14 01:10:24.697645 containerd[1584]: time="2026-04-14T01:10:24.697619714Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 14 01:10:24.697747 containerd[1584]: time="2026-04-14T01:10:24.697719711Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.697975073Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698043641Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698053985Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698062843Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698078315Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698088531Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698096831Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698106474Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698115608Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698124847Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698133139Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1
Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698141853Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698156715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698164 containerd[1584]: time="2026-04-14T01:10:24.698166885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698178273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698188871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698202015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698211758Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698220069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698230239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698244696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698256855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698265223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698274504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698283877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698294328Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698308790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698359600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698391 containerd[1584]: time="2026-04-14T01:10:24.698368381Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 14 01:10:24.698592 containerd[1584]: time="2026-04-14T01:10:24.698399434Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 14 01:10:24.698592 containerd[1584]: time="2026-04-14T01:10:24.698411613Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 14 01:10:24.698592 containerd[1584]: time="2026-04-14T01:10:24.698419936Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 14 01:10:24.698592 containerd[1584]: time="2026-04-14T01:10:24.698428316Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 14 01:10:24.698592 containerd[1584]: time="2026-04-14T01:10:24.698434878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698592 containerd[1584]: time="2026-04-14T01:10:24.698463073Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 14 01:10:24.698592 containerd[1584]: time="2026-04-14T01:10:24.698473351Z" level=info msg="NRI interface is disabled by configuration."
Apr 14 01:10:24.698592 containerd[1584]: time="2026-04-14T01:10:24.698481048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 14 01:10:24.698862 containerd[1584]: time="2026-04-14T01:10:24.698795649Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 14 01:10:24.698862 containerd[1584]: time="2026-04-14T01:10:24.698841404Z" level=info msg="Connect containerd service"
Apr 14 01:10:24.699006 containerd[1584]: time="2026-04-14T01:10:24.698868278Z" level=info msg="using legacy CRI server"
Apr 14 01:10:24.699006 containerd[1584]: time="2026-04-14T01:10:24.698873663Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 14 01:10:24.699006 containerd[1584]: time="2026-04-14T01:10:24.698942605Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 14 01:10:24.699408 containerd[1584]: time="2026-04-14T01:10:24.699384661Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 14 01:10:24.699621 containerd[1584]: time="2026-04-14T01:10:24.699589248Z" level=info msg="Start subscribing containerd event"
Apr 14 01:10:24.699641 containerd[1584]: time="2026-04-14T01:10:24.699635765Z" level=info msg="Start recovering state"
Apr 14 01:10:24.699693 containerd[1584]: time="2026-04-14T01:10:24.699676012Z" level=info msg="Start event monitor"
Apr 14 01:10:24.699708 containerd[1584]: time="2026-04-14T01:10:24.699697574Z" level=info msg="Start snapshots syncer"
Apr 14 01:10:24.699708 containerd[1584]: time="2026-04-14T01:10:24.699705004Z" level=info msg="Start cni network conf syncer for default"
Apr 14 01:10:24.699733 containerd[1584]: time="2026-04-14T01:10:24.699710259Z" level=info msg="Start streaming server"
Apr 14 01:10:24.700054 containerd[1584]: time="2026-04-14T01:10:24.700033238Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 14 01:10:24.700111 containerd[1584]: time="2026-04-14T01:10:24.700093610Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 14 01:10:24.700848 systemd[1]: Started containerd.service - containerd container runtime.
Apr 14 01:10:24.701309 containerd[1584]: time="2026-04-14T01:10:24.700959299Z" level=info msg="containerd successfully booted in 0.033665s"
Apr 14 01:10:24.952431 tar[1582]: linux-amd64/README.md
Apr 14 01:10:24.969147 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 14 01:10:25.221311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:25.224337 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 14 01:10:25.226593 systemd[1]: Startup finished in 5.878s (kernel) + 3.663s (userspace) = 9.542s.
Apr 14 01:10:25.226793 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 01:10:26.189583 kubelet[1666]: E0414 01:10:26.189250 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 01:10:26.196982 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 01:10:26.197262 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 01:10:30.011031 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 14 01:10:30.022656 systemd[1]: Started sshd@0-10.0.0.8:22-10.0.0.1:59568.service - OpenSSH per-connection server daemon (10.0.0.1:59568).
Apr 14 01:10:30.080000 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 59568 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.084778 sshd[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.098874 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 14 01:10:30.114121 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 14 01:10:30.117156 systemd-logind[1568]: New session 1 of user core.
Apr 14 01:10:30.131189 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 14 01:10:30.143807 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 14 01:10:30.147210 (systemd)[1685]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 14 01:10:30.234814 systemd[1685]: Queued start job for default target default.target.
Apr 14 01:10:30.235198 systemd[1685]: Created slice app.slice - User Application Slice.
Apr 14 01:10:30.235234 systemd[1685]: Reached target paths.target - Paths.
Apr 14 01:10:30.235243 systemd[1685]: Reached target timers.target - Timers.
Apr 14 01:10:30.254713 systemd[1685]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 14 01:10:30.261112 systemd[1685]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 14 01:10:30.261179 systemd[1685]: Reached target sockets.target - Sockets.
Apr 14 01:10:30.261188 systemd[1685]: Reached target basic.target - Basic System.
Apr 14 01:10:30.261218 systemd[1685]: Reached target default.target - Main User Target.
Apr 14 01:10:30.261237 systemd[1685]: Startup finished in 105ms.
Apr 14 01:10:30.261868 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 14 01:10:30.263836 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 14 01:10:30.325974 systemd[1]: Started sshd@1-10.0.0.8:22-10.0.0.1:59578.service - OpenSSH per-connection server daemon (10.0.0.1:59578).
Apr 14 01:10:30.368924 sshd[1697]: Accepted publickey for core from 10.0.0.1 port 59578 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.370199 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.376232 systemd-logind[1568]: New session 2 of user core.
Apr 14 01:10:30.391137 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 14 01:10:30.451325 sshd[1697]: pam_unix(sshd:session): session closed for user core
Apr 14 01:10:30.460681 systemd[1]: Started sshd@2-10.0.0.8:22-10.0.0.1:59588.service - OpenSSH per-connection server daemon (10.0.0.1:59588).
Apr 14 01:10:30.461138 systemd[1]: sshd@1-10.0.0.8:22-10.0.0.1:59578.service: Deactivated successfully.
Apr 14 01:10:30.462420 systemd[1]: session-2.scope: Deactivated successfully.
Apr 14 01:10:30.462975 systemd-logind[1568]: Session 2 logged out. Waiting for processes to exit.
Apr 14 01:10:30.464888 systemd-logind[1568]: Removed session 2.
Apr 14 01:10:30.497369 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 59588 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.501597 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.509832 systemd-logind[1568]: New session 3 of user core.
Apr 14 01:10:30.522609 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 14 01:10:30.577908 sshd[1702]: pam_unix(sshd:session): session closed for user core
Apr 14 01:10:30.597282 systemd[1]: Started sshd@3-10.0.0.8:22-10.0.0.1:59598.service - OpenSSH per-connection server daemon (10.0.0.1:59598).
Apr 14 01:10:30.598160 systemd[1]: sshd@2-10.0.0.8:22-10.0.0.1:59588.service: Deactivated successfully.
Apr 14 01:10:30.600326 systemd[1]: session-3.scope: Deactivated successfully.
Apr 14 01:10:30.601192 systemd-logind[1568]: Session 3 logged out. Waiting for processes to exit.
Apr 14 01:10:30.603060 systemd-logind[1568]: Removed session 3.
Apr 14 01:10:30.625483 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 59598 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.626986 sshd[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.631444 systemd-logind[1568]: New session 4 of user core.
Apr 14 01:10:30.638240 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 14 01:10:30.698228 sshd[1711]: pam_unix(sshd:session): session closed for user core
Apr 14 01:10:30.706672 systemd[1]: Started sshd@4-10.0.0.8:22-10.0.0.1:59608.service - OpenSSH per-connection server daemon (10.0.0.1:59608).
Apr 14 01:10:30.706976 systemd[1]: sshd@3-10.0.0.8:22-10.0.0.1:59598.service: Deactivated successfully.
Apr 14 01:10:30.708983 systemd-logind[1568]: Session 4 logged out. Waiting for processes to exit.
Apr 14 01:10:30.709253 systemd[1]: session-4.scope: Deactivated successfully.
Apr 14 01:10:30.710905 systemd-logind[1568]: Removed session 4.
Apr 14 01:10:30.735120 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 59608 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.736325 sshd[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.742220 systemd-logind[1568]: New session 5 of user core.
Apr 14 01:10:30.748588 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 14 01:10:30.816246 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 14 01:10:30.816514 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 01:10:30.842183 sudo[1725]: pam_unix(sudo:session): session closed for user root
Apr 14 01:10:30.845270 sshd[1718]: pam_unix(sshd:session): session closed for user core
Apr 14 01:10:30.852878 systemd[1]: Started sshd@5-10.0.0.8:22-10.0.0.1:59612.service - OpenSSH per-connection server daemon (10.0.0.1:59612).
Apr 14 01:10:30.853242 systemd[1]: sshd@4-10.0.0.8:22-10.0.0.1:59608.service: Deactivated successfully.
Apr 14 01:10:30.855030 systemd-logind[1568]: Session 5 logged out. Waiting for processes to exit.
Apr 14 01:10:30.855510 systemd[1]: session-5.scope: Deactivated successfully.
Apr 14 01:10:30.856538 systemd-logind[1568]: Removed session 5.
Apr 14 01:10:30.885239 sshd[1727]: Accepted publickey for core from 10.0.0.1 port 59612 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:30.887018 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:30.893287 systemd-logind[1568]: New session 6 of user core.
Apr 14 01:10:30.902874 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 14 01:10:30.961506 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 14 01:10:30.961854 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 01:10:30.966238 sudo[1735]: pam_unix(sudo:session): session closed for user root
Apr 14 01:10:30.972406 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 14 01:10:30.972639 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 01:10:30.989746 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 14 01:10:30.991801 auditctl[1738]: No rules
Apr 14 01:10:30.993718 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 14 01:10:30.993960 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 14 01:10:30.996438 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 14 01:10:31.042296 augenrules[1757]: No rules
Apr 14 01:10:31.044889 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 14 01:10:31.046591 sudo[1734]: pam_unix(sudo:session): session closed for user root
Apr 14 01:10:31.050623 sshd[1727]: pam_unix(sshd:session): session closed for user core
Apr 14 01:10:31.073028 systemd[1]: Started sshd@6-10.0.0.8:22-10.0.0.1:59616.service - OpenSSH per-connection server daemon (10.0.0.1:59616).
Apr 14 01:10:31.073568 systemd[1]: sshd@5-10.0.0.8:22-10.0.0.1:59612.service: Deactivated successfully.
Apr 14 01:10:31.077169 systemd-logind[1568]: Session 6 logged out. Waiting for processes to exit.
Apr 14 01:10:31.077904 systemd[1]: session-6.scope: Deactivated successfully.
Apr 14 01:10:31.079100 systemd-logind[1568]: Removed session 6.
Apr 14 01:10:31.112448 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 59616 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:10:31.115138 sshd[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:10:31.123229 systemd-logind[1568]: New session 7 of user core.
Apr 14 01:10:31.144078 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 14 01:10:31.201626 sudo[1770]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 14 01:10:31.201954 sudo[1770]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 14 01:10:31.559312 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 14 01:10:31.559387 (dockerd)[1788]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 14 01:10:31.918738 dockerd[1788]: time="2026-04-14T01:10:31.918462777Z" level=info msg="Starting up"
Apr 14 01:10:32.206003 dockerd[1788]: time="2026-04-14T01:10:32.205677372Z" level=info msg="Loading containers: start."
Apr 14 01:10:32.343396 kernel: Initializing XFRM netlink socket
Apr 14 01:10:32.444812 systemd-networkd[1244]: docker0: Link UP
Apr 14 01:10:32.471179 dockerd[1788]: time="2026-04-14T01:10:32.471040729Z" level=info msg="Loading containers: done."
Apr 14 01:10:32.489224 dockerd[1788]: time="2026-04-14T01:10:32.489134332Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 14 01:10:32.489469 dockerd[1788]: time="2026-04-14T01:10:32.489301475Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 14 01:10:32.489599 dockerd[1788]: time="2026-04-14T01:10:32.489554609Z" level=info msg="Daemon has completed initialization"
Apr 14 01:10:32.543665 dockerd[1788]: time="2026-04-14T01:10:32.543381006Z" level=info msg="API listen on /run/docker.sock"
Apr 14 01:10:32.544295 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 14 01:10:33.146043 containerd[1584]: time="2026-04-14T01:10:33.145655810Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\""
Apr 14 01:10:33.997569 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1611211611.mount: Deactivated successfully.
Apr 14 01:10:34.932142 containerd[1584]: time="2026-04-14T01:10:34.931869213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:34.933076 containerd[1584]: time="2026-04-14T01:10:34.932832380Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29988857"
Apr 14 01:10:34.933978 containerd[1584]: time="2026-04-14T01:10:34.933884888Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:34.940171 containerd[1584]: time="2026-04-14T01:10:34.939963831Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:34.941139 containerd[1584]: time="2026-04-14T01:10:34.941084545Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 1.79527239s"
Apr 14 01:10:34.941139 containerd[1584]: time="2026-04-14T01:10:34.941131408Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\""
Apr 14 01:10:34.942400 containerd[1584]: time="2026-04-14T01:10:34.942178924Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\""
Apr 14 01:10:35.835649 containerd[1584]: time="2026-04-14T01:10:35.834919159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:35.837008 containerd[1584]: time="2026-04-14T01:10:35.836766181Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26021841"
Apr 14 01:10:35.838013 containerd[1584]: time="2026-04-14T01:10:35.837952344Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:35.844967 containerd[1584]: time="2026-04-14T01:10:35.844717204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:35.846198 containerd[1584]: time="2026-04-14T01:10:35.846145007Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 903.741472ms"
Apr 14 01:10:35.846198 containerd[1584]: time="2026-04-14T01:10:35.846189298Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\""
Apr 14 01:10:35.847119 containerd[1584]: time="2026-04-14T01:10:35.847059053Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\""
Apr 14 01:10:36.445193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 14 01:10:36.453579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 01:10:36.565571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:36.568783 (kubelet)[2016]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 14 01:10:36.628053 kubelet[2016]: E0414 01:10:36.627935 2016 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 14 01:10:36.632276 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 14 01:10:36.633074 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 14 01:10:36.779240 containerd[1584]: time="2026-04-14T01:10:36.778845394Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:36.779949 containerd[1584]: time="2026-04-14T01:10:36.779882824Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162685"
Apr 14 01:10:36.780896 containerd[1584]: time="2026-04-14T01:10:36.780820671Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:36.783674 containerd[1584]: time="2026-04-14T01:10:36.783603237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:36.784617 containerd[1584]: time="2026-04-14T01:10:36.784579731Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 937.452521ms"
Apr 14 01:10:36.784617 containerd[1584]: time="2026-04-14T01:10:36.784617309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\""
Apr 14 01:10:36.785534 containerd[1584]: time="2026-04-14T01:10:36.785309049Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\""
Apr 14 01:10:37.579500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504102251.mount: Deactivated successfully.
Apr 14 01:10:38.000299 containerd[1584]: time="2026-04-14T01:10:38.000101958Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:38.001441 containerd[1584]: time="2026-04-14T01:10:38.001359589Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828657"
Apr 14 01:10:38.002638 containerd[1584]: time="2026-04-14T01:10:38.002587856Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:38.004785 containerd[1584]: time="2026-04-14T01:10:38.004734285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:38.005586 containerd[1584]: time="2026-04-14T01:10:38.005298096Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 1.219830355s"
Apr 14 01:10:38.005586 containerd[1584]: time="2026-04-14T01:10:38.005552631Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\""
Apr 14 01:10:38.006346 containerd[1584]: time="2026-04-14T01:10:38.006254957Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 14 01:10:38.473732 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3631849681.mount: Deactivated successfully.
Apr 14 01:10:39.180903 containerd[1584]: time="2026-04-14T01:10:39.180750871Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:39.181449 containerd[1584]: time="2026-04-14T01:10:39.181418831Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20941714"
Apr 14 01:10:39.182519 containerd[1584]: time="2026-04-14T01:10:39.182475381Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:39.184967 containerd[1584]: time="2026-04-14T01:10:39.184936015Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:39.185780 containerd[1584]: time="2026-04-14T01:10:39.185737033Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.179412913s"
Apr 14 01:10:39.185780 containerd[1584]: time="2026-04-14T01:10:39.185773799Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 14 01:10:39.186406 containerd[1584]: time="2026-04-14T01:10:39.186377919Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 14 01:10:39.542780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938394549.mount: Deactivated successfully.
Apr 14 01:10:39.548352 containerd[1584]: time="2026-04-14T01:10:39.548248917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:39.548867 containerd[1584]: time="2026-04-14T01:10:39.548780234Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321070"
Apr 14 01:10:39.551019 containerd[1584]: time="2026-04-14T01:10:39.550715163Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:39.553096 containerd[1584]: time="2026-04-14T01:10:39.552994302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:39.553739 containerd[1584]: time="2026-04-14T01:10:39.553599192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 367.192351ms"
Apr 14 01:10:39.554047 containerd[1584]: time="2026-04-14T01:10:39.553873990Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 14 01:10:39.554631 containerd[1584]: time="2026-04-14T01:10:39.554500832Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 14 01:10:40.041159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2992572178.mount: Deactivated successfully.
Apr 14 01:10:40.786700 containerd[1584]: time="2026-04-14T01:10:40.786581455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:40.787471 containerd[1584]: time="2026-04-14T01:10:40.787389081Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718278"
Apr 14 01:10:40.788669 containerd[1584]: time="2026-04-14T01:10:40.788627352Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:40.793407 containerd[1584]: time="2026-04-14T01:10:40.793253241Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.238728069s"
Apr 14 01:10:40.793407 containerd[1584]: time="2026-04-14T01:10:40.793401072Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 14 01:10:40.793917 containerd[1584]: time="2026-04-14T01:10:40.793845247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:42.944750 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:42.957173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 01:10:42.983413 systemd[1]: Reloading requested from client PID 2185 ('systemctl') (unit session-7.scope)...
Apr 14 01:10:42.983446 systemd[1]: Reloading...
Apr 14 01:10:43.047379 zram_generator::config[2224]: No configuration found.
Apr 14 01:10:43.130178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 01:10:43.178866 systemd[1]: Reloading finished in 194 ms.
Apr 14 01:10:43.231687 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 14 01:10:43.231773 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 14 01:10:43.232107 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:43.233887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 01:10:43.361268 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:43.365470 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 01:10:43.407448 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 01:10:43.407448 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 01:10:43.407448 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 14 01:10:43.407978 kubelet[2284]: I0414 01:10:43.407481 2284 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 14 01:10:44.001830 kubelet[2284]: I0414 01:10:44.001770 2284 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 14 01:10:44.001830 kubelet[2284]: I0414 01:10:44.001814 2284 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 14 01:10:44.002079 kubelet[2284]: I0414 01:10:44.002046 2284 server.go:956] "Client rotation is on, will bootstrap in background" Apr 14 01:10:44.022047 kubelet[2284]: E0414 01:10:44.021958 2284 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.8:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 14 01:10:44.023861 kubelet[2284]: I0414 01:10:44.023829 2284 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 14 01:10:44.031232 kubelet[2284]: E0414 01:10:44.031150 2284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 14 01:10:44.031232 kubelet[2284]: I0414 01:10:44.031200 2284 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 14 01:10:44.035291 kubelet[2284]: I0414 01:10:44.035244 2284 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 14 01:10:44.035690 kubelet[2284]: I0414 01:10:44.035646 2284 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 14 01:10:44.035840 kubelet[2284]: I0414 01:10:44.035683 2284 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Apr 14 
01:10:44.035934 kubelet[2284]: I0414 01:10:44.035841 2284 topology_manager.go:138] "Creating topology manager with none policy" Apr 14 01:10:44.035934 kubelet[2284]: I0414 01:10:44.035855 2284 container_manager_linux.go:303] "Creating device plugin manager" Apr 14 01:10:44.035988 kubelet[2284]: I0414 01:10:44.035966 2284 state_mem.go:36] "Initialized new in-memory state store" Apr 14 01:10:44.042494 kubelet[2284]: I0414 01:10:44.041888 2284 kubelet.go:480] "Attempting to sync node with API server" Apr 14 01:10:44.042578 kubelet[2284]: I0414 01:10:44.042502 2284 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 14 01:10:44.042578 kubelet[2284]: I0414 01:10:44.042536 2284 kubelet.go:386] "Adding apiserver pod source" Apr 14 01:10:44.044488 kubelet[2284]: I0414 01:10:44.044468 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 14 01:10:44.045707 kubelet[2284]: E0414 01:10:44.045625 2284 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 14 01:10:44.046192 kubelet[2284]: E0414 01:10:44.046102 2284 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.8:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 14 01:10:44.047365 kubelet[2284]: I0414 01:10:44.047279 2284 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 14 01:10:44.047768 kubelet[2284]: I0414 01:10:44.047729 2284 kubelet.go:935] "Not starting ClusterTrustBundle informer because 
we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 14 01:10:44.048389 kubelet[2284]: W0414 01:10:44.048370 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Apr 14 01:10:44.051886 kubelet[2284]: I0414 01:10:44.051851 2284 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 14 01:10:44.051933 kubelet[2284]: I0414 01:10:44.051902 2284 server.go:1289] "Started kubelet" Apr 14 01:10:44.052065 kubelet[2284]: I0414 01:10:44.051971 2284 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 14 01:10:44.052732 kubelet[2284]: I0414 01:10:44.051996 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 14 01:10:44.052732 kubelet[2284]: I0414 01:10:44.052503 2284 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 14 01:10:44.053197 kubelet[2284]: I0414 01:10:44.053139 2284 server.go:317] "Adding debug handlers to kubelet server" Apr 14 01:10:44.055781 kubelet[2284]: I0414 01:10:44.054625 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 14 01:10:44.055781 kubelet[2284]: I0414 01:10:44.054757 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 14 01:10:44.055781 kubelet[2284]: E0414 01:10:44.054282 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.8:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.8:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18a613f489f74ea0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2026-04-14 01:10:44.05187344 +0000 UTC m=+0.682232748,LastTimestamp:2026-04-14 01:10:44.05187344 +0000 UTC m=+0.682232748,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Apr 14 01:10:44.056369 kubelet[2284]: E0414 01:10:44.056246 2284 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Apr 14 01:10:44.056369 kubelet[2284]: I0414 01:10:44.056313 2284 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 14 01:10:44.056458 kubelet[2284]: I0414 01:10:44.056421 2284 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 14 01:10:44.056486 kubelet[2284]: I0414 01:10:44.056461 2284 reconciler.go:26] "Reconciler: start to sync state" Apr 14 01:10:44.056930 kubelet[2284]: E0414 01:10:44.056770 2284 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.8:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 14 01:10:44.056930 kubelet[2284]: E0414 01:10:44.056806 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="200ms" Apr 14 01:10:44.057757 kubelet[2284]: I0414 01:10:44.057253 2284 factory.go:223] Registration of the systemd container factory successfully Apr 14 01:10:44.057757 kubelet[2284]: I0414 01:10:44.057390 2284 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory 
Apr 14 01:10:44.058432 kubelet[2284]: E0414 01:10:44.058398 2284 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 14 01:10:44.058738 kubelet[2284]: I0414 01:10:44.058710 2284 factory.go:223] Registration of the containerd container factory successfully Apr 14 01:10:44.079503 kubelet[2284]: I0414 01:10:44.079428 2284 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 14 01:10:44.081372 kubelet[2284]: I0414 01:10:44.081260 2284 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 14 01:10:44.081426 kubelet[2284]: I0414 01:10:44.081376 2284 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 14 01:10:44.081426 kubelet[2284]: I0414 01:10:44.081398 2284 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 14 01:10:44.081426 kubelet[2284]: I0414 01:10:44.081406 2284 kubelet.go:2436] "Starting kubelet main sync loop" Apr 14 01:10:44.081506 kubelet[2284]: E0414 01:10:44.081459 2284 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 14 01:10:44.083425 kubelet[2284]: E0414 01:10:44.083298 2284 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 01:10:44.084298 kubelet[2284]: I0414 01:10:44.083897 2284 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 14 01:10:44.084298 kubelet[2284]: I0414 01:10:44.083910 2284 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 14 01:10:44.084298 kubelet[2284]: I0414 01:10:44.083924 
2284 state_mem.go:36] "Initialized new in-memory state store" Apr 14 01:10:44.124413 kubelet[2284]: I0414 01:10:44.124247 2284 policy_none.go:49] "None policy: Start" Apr 14 01:10:44.124413 kubelet[2284]: I0414 01:10:44.124306 2284 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 14 01:10:44.124413 kubelet[2284]: I0414 01:10:44.124373 2284 state_mem.go:35] "Initializing new in-memory state store" Apr 14 01:10:44.130907 kubelet[2284]: E0414 01:10:44.130838 2284 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 14 01:10:44.131093 kubelet[2284]: I0414 01:10:44.131030 2284 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 14 01:10:44.131093 kubelet[2284]: I0414 01:10:44.131040 2284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 14 01:10:44.131282 kubelet[2284]: I0414 01:10:44.131245 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 14 01:10:44.132537 kubelet[2284]: E0414 01:10:44.132506 2284 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 14 01:10:44.132594 kubelet[2284]: E0414 01:10:44.132562 2284 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Apr 14 01:10:44.197844 kubelet[2284]: E0414 01:10:44.197722 2284 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:10:44.200584 kubelet[2284]: E0414 01:10:44.200456 2284 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:10:44.210633 kubelet[2284]: E0414 01:10:44.210523 2284 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Apr 14 01:10:44.242582 kubelet[2284]: I0414 01:10:44.241967 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 01:10:44.243823 kubelet[2284]: E0414 01:10:44.243758 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Apr 14 01:10:44.263682 kubelet[2284]: E0414 01:10:44.263429 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="400ms" Apr 14 01:10:44.361694 kubelet[2284]: I0414 01:10:44.361182 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30c884d313d9e1c318a62da468c4549a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"30c884d313d9e1c318a62da468c4549a\") " 
pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:44.361694 kubelet[2284]: I0414 01:10:44.361654 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:44.361694 kubelet[2284]: I0414 01:10:44.361752 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:44.362064 kubelet[2284]: I0414 01:10:44.361803 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:44.362064 kubelet[2284]: I0414 01:10:44.361832 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost" Apr 14 01:10:44.362064 kubelet[2284]: I0414 01:10:44.361849 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " 
pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:44.362064 kubelet[2284]: I0414 01:10:44.361867 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost" Apr 14 01:10:44.362064 kubelet[2284]: I0414 01:10:44.361909 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30c884d313d9e1c318a62da468c4549a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"30c884d313d9e1c318a62da468c4549a\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:44.362158 kubelet[2284]: I0414 01:10:44.361924 2284 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30c884d313d9e1c318a62da468c4549a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"30c884d313d9e1c318a62da468c4549a\") " pod="kube-system/kube-apiserver-localhost" Apr 14 01:10:44.446689 kubelet[2284]: I0414 01:10:44.446594 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 01:10:44.447366 kubelet[2284]: E0414 01:10:44.447268 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Apr 14 01:10:44.498741 kubelet[2284]: E0414 01:10:44.498663 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:44.499840 containerd[1584]: time="2026-04-14T01:10:44.499797891Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,}" Apr 14 01:10:44.501402 kubelet[2284]: E0414 01:10:44.501304 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:44.502160 containerd[1584]: time="2026-04-14T01:10:44.501928578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:30c884d313d9e1c318a62da468c4549a,Namespace:kube-system,Attempt:0,}" Apr 14 01:10:44.511991 kubelet[2284]: E0414 01:10:44.511949 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:44.512759 containerd[1584]: time="2026-04-14T01:10:44.512631436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,}" Apr 14 01:10:44.665274 kubelet[2284]: E0414 01:10:44.665000 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.8:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.8:6443: connect: connection refused" interval="800ms" Apr 14 01:10:44.850767 kubelet[2284]: I0414 01:10:44.850706 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Apr 14 01:10:44.851113 kubelet[2284]: E0414 01:10:44.851052 2284 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.8:6443/api/v1/nodes\": dial tcp 10.0.0.8:6443: connect: connection refused" node="localhost" Apr 14 01:10:44.870811 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1338993818.mount: Deactivated successfully. 
Apr 14 01:10:44.881687 containerd[1584]: time="2026-04-14T01:10:44.881464400Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 01:10:44.882768 containerd[1584]: time="2026-04-14T01:10:44.882700773Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 01:10:44.883412 containerd[1584]: time="2026-04-14T01:10:44.883305195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=311988" Apr 14 01:10:44.884112 containerd[1584]: time="2026-04-14T01:10:44.884034034Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 01:10:44.885056 containerd[1584]: time="2026-04-14T01:10:44.884990387Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 01:10:44.886060 containerd[1584]: time="2026-04-14T01:10:44.886002295Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 01:10:44.886466 containerd[1584]: time="2026-04-14T01:10:44.886404951Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 14 01:10:44.888494 containerd[1584]: time="2026-04-14T01:10:44.888452930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 14 01:10:44.889910 
containerd[1584]: time="2026-04-14T01:10:44.889738952Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 376.967871ms" Apr 14 01:10:44.892264 containerd[1584]: time="2026-04-14T01:10:44.892172419Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 390.178212ms" Apr 14 01:10:44.895282 containerd[1584]: time="2026-04-14T01:10:44.895167147Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 395.286323ms" Apr 14 01:10:45.014727 containerd[1584]: time="2026-04-14T01:10:45.014347557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:10:45.014915 containerd[1584]: time="2026-04-14T01:10:45.014795070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:10:45.015236 containerd[1584]: time="2026-04-14T01:10:45.015050761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:10:45.015236 containerd[1584]: time="2026-04-14T01:10:45.014652337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:10:45.015236 containerd[1584]: time="2026-04-14T01:10:45.015096660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:10:45.015236 containerd[1584]: time="2026-04-14T01:10:45.015105576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:10:45.015236 containerd[1584]: time="2026-04-14T01:10:45.015194497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:10:45.015488 containerd[1584]: time="2026-04-14T01:10:45.015404181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:10:45.017275 containerd[1584]: time="2026-04-14T01:10:45.017059240Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:10:45.017275 containerd[1584]: time="2026-04-14T01:10:45.017089277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:10:45.017275 containerd[1584]: time="2026-04-14T01:10:45.017097004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:10:45.017275 containerd[1584]: time="2026-04-14T01:10:45.017145081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:10:45.071225 containerd[1584]: time="2026-04-14T01:10:45.071160063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:30c884d313d9e1c318a62da468c4549a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a492b61ea79d4010ef53c453da873db808722917701f3b5eba35c7d9c4762a4\"" Apr 14 01:10:45.073384 kubelet[2284]: E0414 01:10:45.073354 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:45.079643 containerd[1584]: time="2026-04-14T01:10:45.078991463Z" level=info msg="CreateContainer within sandbox \"6a492b61ea79d4010ef53c453da873db808722917701f3b5eba35c7d9c4762a4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 14 01:10:45.083111 containerd[1584]: time="2026-04-14T01:10:45.083044435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ebf8e820819e4b80bc03d078b9ba80f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e4078a3269f891600da37671e5ae4fa7ab6264355efaaab176523cb79ed804c\"" Apr 14 01:10:45.083670 kubelet[2284]: E0414 01:10:45.083636 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:45.088229 containerd[1584]: time="2026-04-14T01:10:45.088171616Z" level=info msg="CreateContainer within sandbox \"7e4078a3269f891600da37671e5ae4fa7ab6264355efaaab176523cb79ed804c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 14 01:10:45.089153 containerd[1584]: time="2026-04-14T01:10:45.089100226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:39798d73a6894e44ae801eb773bf9a39,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"adf0bb78ea2f34eb4b97c161b2c2834709febbe79a21626b91ea478e6ea5424f\"" Apr 14 01:10:45.089811 kubelet[2284]: E0414 01:10:45.089691 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:45.093808 containerd[1584]: time="2026-04-14T01:10:45.093638423Z" level=info msg="CreateContainer within sandbox \"adf0bb78ea2f34eb4b97c161b2c2834709febbe79a21626b91ea478e6ea5424f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 14 01:10:45.095145 kubelet[2284]: E0414 01:10:45.095114 2284 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.8:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 14 01:10:45.097996 containerd[1584]: time="2026-04-14T01:10:45.097882306Z" level=info msg="CreateContainer within sandbox \"6a492b61ea79d4010ef53c453da873db808722917701f3b5eba35c7d9c4762a4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f97d0fdfe0e4b3649054c3bdabcabbfaaf4df88781905487a96766ebb9ed8192\"" Apr 14 01:10:45.099360 containerd[1584]: time="2026-04-14T01:10:45.099217262Z" level=info msg="StartContainer for \"f97d0fdfe0e4b3649054c3bdabcabbfaaf4df88781905487a96766ebb9ed8192\"" Apr 14 01:10:45.108523 containerd[1584]: time="2026-04-14T01:10:45.108428841Z" level=info msg="CreateContainer within sandbox \"7e4078a3269f891600da37671e5ae4fa7ab6264355efaaab176523cb79ed804c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a28780025085de89e87e36e6b96d5b4a16e8499d47616db4ae13c370e8fce05f\"" Apr 14 01:10:45.109021 containerd[1584]: time="2026-04-14T01:10:45.109002724Z" level=info msg="StartContainer for 
\"a28780025085de89e87e36e6b96d5b4a16e8499d47616db4ae13c370e8fce05f\""
Apr 14 01:10:45.112573 containerd[1584]: time="2026-04-14T01:10:45.112538364Z" level=info msg="CreateContainer within sandbox \"adf0bb78ea2f34eb4b97c161b2c2834709febbe79a21626b91ea478e6ea5424f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ac0ca96093be572045461a5a8b863c8e351b406fa755d65792e25769002ffae9\""
Apr 14 01:10:45.113830 containerd[1584]: time="2026-04-14T01:10:45.113062755Z" level=info msg="StartContainer for \"ac0ca96093be572045461a5a8b863c8e351b406fa755d65792e25769002ffae9\""
Apr 14 01:10:45.164873 kubelet[2284]: E0414 01:10:45.164839 2284 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.8:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.8:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 14 01:10:45.168738 containerd[1584]: time="2026-04-14T01:10:45.168623801Z" level=info msg="StartContainer for \"f97d0fdfe0e4b3649054c3bdabcabbfaaf4df88781905487a96766ebb9ed8192\" returns successfully"
Apr 14 01:10:45.191000 containerd[1584]: time="2026-04-14T01:10:45.190874967Z" level=info msg="StartContainer for \"a28780025085de89e87e36e6b96d5b4a16e8499d47616db4ae13c370e8fce05f\" returns successfully"
Apr 14 01:10:45.201044 containerd[1584]: time="2026-04-14T01:10:45.200903939Z" level=info msg="StartContainer for \"ac0ca96093be572045461a5a8b863c8e351b406fa755d65792e25769002ffae9\" returns successfully"
Apr 14 01:10:45.654157 kubelet[2284]: I0414 01:10:45.654086 2284 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 01:10:46.096973 kubelet[2284]: E0414 01:10:46.096911 2284 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 01:10:46.097167 kubelet[2284]: E0414 01:10:46.097104 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:46.100427 kubelet[2284]: E0414 01:10:46.100392 2284 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 01:10:46.100532 kubelet[2284]: E0414 01:10:46.100506 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:46.101777 kubelet[2284]: E0414 01:10:46.101744 2284 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Apr 14 01:10:46.101869 kubelet[2284]: E0414 01:10:46.101843 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:46.500275 kubelet[2284]: E0414 01:10:46.500202 2284 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Apr 14 01:10:46.591654 kubelet[2284]: I0414 01:10:46.591544 2284 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 14 01:10:46.659361 kubelet[2284]: I0414 01:10:46.656964 2284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 01:10:46.666692 kubelet[2284]: E0414 01:10:46.666625 2284 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 14 01:10:46.666692 kubelet[2284]: I0414 01:10:46.666670 2284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:46.668226 kubelet[2284]: E0414 01:10:46.668185 2284 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:46.668226 kubelet[2284]: I0414 01:10:46.668225 2284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:46.670126 kubelet[2284]: E0414 01:10:46.670054 2284 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:47.047073 kubelet[2284]: I0414 01:10:47.046890 2284 apiserver.go:52] "Watching apiserver"
Apr 14 01:10:47.056664 kubelet[2284]: I0414 01:10:47.056574 2284 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 14 01:10:47.103663 kubelet[2284]: I0414 01:10:47.103573 2284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:47.103808 kubelet[2284]: I0414 01:10:47.103706 2284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:47.103808 kubelet[2284]: I0414 01:10:47.103577 2284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 01:10:47.105512 kubelet[2284]: E0414 01:10:47.105487 2284 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:47.105631 kubelet[2284]: E0414 01:10:47.105602 2284 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Apr 14 01:10:47.105727 kubelet[2284]: E0414 01:10:47.105697 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:47.105769 kubelet[2284]: E0414 01:10:47.105699 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:47.105769 kubelet[2284]: E0414 01:10:47.105741 2284 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:47.105904 kubelet[2284]: E0414 01:10:47.105854 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:48.106961 kubelet[2284]: I0414 01:10:48.106914 2284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:48.116685 kubelet[2284]: E0414 01:10:48.116648 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:49.108373 kubelet[2284]: E0414 01:10:49.108284 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:49.319509 systemd[1]: Reloading requested from client PID 2572 ('systemctl') (unit session-7.scope)...
Apr 14 01:10:49.319542 systemd[1]: Reloading...
Apr 14 01:10:49.383699 zram_generator::config[2614]: No configuration found.
Apr 14 01:10:49.494868 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 14 01:10:49.535815 kubelet[2284]: I0414 01:10:49.535732 2284 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:49.544919 kubelet[2284]: E0414 01:10:49.544712 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:49.554934 systemd[1]: Reloading finished in 235 ms.
Apr 14 01:10:49.587790 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 01:10:49.610677 systemd[1]: kubelet.service: Deactivated successfully.
Apr 14 01:10:49.610957 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:49.621590 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 14 01:10:49.746216 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 14 01:10:49.749848 (kubelet)[2666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 14 01:10:49.807153 kubelet[2666]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 01:10:49.807153 kubelet[2666]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 14 01:10:49.807153 kubelet[2666]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 14 01:10:49.807635 kubelet[2666]: I0414 01:10:49.807225 2666 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 14 01:10:49.814167 kubelet[2666]: I0414 01:10:49.814118 2666 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 14 01:10:49.814258 kubelet[2666]: I0414 01:10:49.814188 2666 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 14 01:10:49.814784 kubelet[2666]: I0414 01:10:49.814734 2666 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 14 01:10:49.818883 kubelet[2666]: I0414 01:10:49.818759 2666 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Apr 14 01:10:49.820929 kubelet[2666]: I0414 01:10:49.820865 2666 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 14 01:10:49.823658 kubelet[2666]: E0414 01:10:49.823604 2666 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 14 01:10:49.823658 kubelet[2666]: I0414 01:10:49.823631 2666 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 14 01:10:49.829585 kubelet[2666]: I0414 01:10:49.829527 2666 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 14 01:10:49.830081 kubelet[2666]: I0414 01:10:49.829990 2666 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 14 01:10:49.830156 kubelet[2666]: I0414 01:10:49.830029 2666 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Apr 14 01:10:49.830156 kubelet[2666]: I0414 01:10:49.830152 2666 topology_manager.go:138] "Creating topology manager with none policy"
Apr 14 01:10:49.830156 kubelet[2666]: I0414 01:10:49.830158 2666 container_manager_linux.go:303] "Creating device plugin manager"
Apr 14 01:10:49.830385 kubelet[2666]: I0414 01:10:49.830198 2666 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 01:10:49.830414 kubelet[2666]: I0414 01:10:49.830392 2666 kubelet.go:480] "Attempting to sync node with API server"
Apr 14 01:10:49.830414 kubelet[2666]: I0414 01:10:49.830401 2666 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 14 01:10:49.830473 kubelet[2666]: I0414 01:10:49.830418 2666 kubelet.go:386] "Adding apiserver pod source"
Apr 14 01:10:49.830473 kubelet[2666]: I0414 01:10:49.830430 2666 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 14 01:10:49.834747 kubelet[2666]: I0414 01:10:49.833660 2666 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 14 01:10:49.835396 kubelet[2666]: I0414 01:10:49.835376 2666 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 14 01:10:49.844794 kubelet[2666]: I0414 01:10:49.844735 2666 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 14 01:10:49.844794 kubelet[2666]: I0414 01:10:49.844769 2666 server.go:1289] "Started kubelet"
Apr 14 01:10:49.846388 kubelet[2666]: I0414 01:10:49.845495 2666 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 14 01:10:49.846388 kubelet[2666]: I0414 01:10:49.845803 2666 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 14 01:10:49.846388 kubelet[2666]: I0414 01:10:49.845835 2666 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 14 01:10:49.847033 kubelet[2666]: I0414 01:10:49.846790 2666 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 14 01:10:49.847033 kubelet[2666]: I0414 01:10:49.846819 2666 server.go:317] "Adding debug handlers to kubelet server"
Apr 14 01:10:49.848270 kubelet[2666]: I0414 01:10:49.848169 2666 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 14 01:10:49.851216 kubelet[2666]: E0414 01:10:49.851192 2666 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 14 01:10:49.852077 kubelet[2666]: I0414 01:10:49.852030 2666 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 14 01:10:49.852458 kubelet[2666]: I0414 01:10:49.852195 2666 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 14 01:10:49.853369 kubelet[2666]: I0414 01:10:49.853272 2666 factory.go:223] Registration of the systemd container factory successfully
Apr 14 01:10:49.853596 kubelet[2666]: I0414 01:10:49.853505 2666 reconciler.go:26] "Reconciler: start to sync state"
Apr 14 01:10:49.853596 kubelet[2666]: I0414 01:10:49.853528 2666 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 14 01:10:49.855160 kubelet[2666]: I0414 01:10:49.854980 2666 factory.go:223] Registration of the containerd container factory successfully
Apr 14 01:10:49.864924 kubelet[2666]: I0414 01:10:49.864889 2666 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 14 01:10:49.866716 kubelet[2666]: I0414 01:10:49.866664 2666 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 14 01:10:49.867058 kubelet[2666]: I0414 01:10:49.866806 2666 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 14 01:10:49.867058 kubelet[2666]: I0414 01:10:49.866832 2666 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 14 01:10:49.867058 kubelet[2666]: I0414 01:10:49.866838 2666 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 14 01:10:49.867058 kubelet[2666]: E0414 01:10:49.866871 2666 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 14 01:10:49.908654 kubelet[2666]: I0414 01:10:49.908589 2666 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 14 01:10:49.908654 kubelet[2666]: I0414 01:10:49.908620 2666 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 14 01:10:49.908654 kubelet[2666]: I0414 01:10:49.908636 2666 state_mem.go:36] "Initialized new in-memory state store"
Apr 14 01:10:49.909107 kubelet[2666]: I0414 01:10:49.909085 2666 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Apr 14 01:10:49.909181 kubelet[2666]: I0414 01:10:49.909109 2666 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Apr 14 01:10:49.909181 kubelet[2666]: I0414 01:10:49.909124 2666 policy_none.go:49] "None policy: Start"
Apr 14 01:10:49.909181 kubelet[2666]: I0414 01:10:49.909132 2666 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 14 01:10:49.909181 kubelet[2666]: I0414 01:10:49.909140 2666 state_mem.go:35] "Initializing new in-memory state store"
Apr 14 01:10:49.909285 kubelet[2666]: I0414 01:10:49.909213 2666 state_mem.go:75] "Updated machine memory state"
Apr 14 01:10:49.911263 kubelet[2666]: E0414 01:10:49.911013 2666 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 14 01:10:49.911997 kubelet[2666]: I0414 01:10:49.911952 2666 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 14 01:10:49.912059 kubelet[2666]: I0414 01:10:49.912022 2666 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 14 01:10:49.913266 kubelet[2666]: I0414 01:10:49.913075 2666 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 14 01:10:49.914880 kubelet[2666]: E0414 01:10:49.914654 2666 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 14 01:10:49.968842 kubelet[2666]: I0414 01:10:49.968369 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 01:10:49.968842 kubelet[2666]: I0414 01:10:49.968438 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:49.968842 kubelet[2666]: I0414 01:10:49.968392 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:49.978984 kubelet[2666]: E0414 01:10:49.978535 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:49.978984 kubelet[2666]: E0414 01:10:49.978859 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:50.026994 kubelet[2666]: I0414 01:10:50.026639 2666 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Apr 14 01:10:50.039731 kubelet[2666]: I0414 01:10:50.039567 2666 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Apr 14 01:10:50.039995 kubelet[2666]: I0414 01:10:50.039979 2666 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Apr 14 01:10:50.055774 kubelet[2666]: I0414 01:10:50.055632 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:50.055774 kubelet[2666]: I0414 01:10:50.055778 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:50.056140 kubelet[2666]: I0414 01:10:50.055812 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:50.056140 kubelet[2666]: I0414 01:10:50.055839 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:50.056140 kubelet[2666]: I0414 01:10:50.055868 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/30c884d313d9e1c318a62da468c4549a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"30c884d313d9e1c318a62da468c4549a\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:50.056140 kubelet[2666]: I0414 01:10:50.055883 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/30c884d313d9e1c318a62da468c4549a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"30c884d313d9e1c318a62da468c4549a\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:50.056140 kubelet[2666]: I0414 01:10:50.055896 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ebf8e820819e4b80bc03d078b9ba80f5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ebf8e820819e4b80bc03d078b9ba80f5\") " pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:50.056267 kubelet[2666]: I0414 01:10:50.055992 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/39798d73a6894e44ae801eb773bf9a39-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"39798d73a6894e44ae801eb773bf9a39\") " pod="kube-system/kube-scheduler-localhost"
Apr 14 01:10:50.056267 kubelet[2666]: I0414 01:10:50.056077 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/30c884d313d9e1c318a62da468c4549a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"30c884d313d9e1c318a62da468c4549a\") " pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:50.279655 kubelet[2666]: E0414 01:10:50.279184 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:50.279655 kubelet[2666]: E0414 01:10:50.279194 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:50.279655 kubelet[2666]: E0414 01:10:50.279195 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:50.834449 kubelet[2666]: I0414 01:10:50.834275 2666 apiserver.go:52] "Watching apiserver"
Apr 14 01:10:50.854105 kubelet[2666]: I0414 01:10:50.853209 2666 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Apr 14 01:10:50.881679 kubelet[2666]: I0414 01:10:50.881372 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Apr 14 01:10:50.881679 kubelet[2666]: I0414 01:10:50.881627 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:50.881847 kubelet[2666]: I0414 01:10:50.881820 2666 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:50.892396 kubelet[2666]: E0414 01:10:50.892219 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Apr 14 01:10:50.892396 kubelet[2666]: E0414 01:10:50.892384 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Apr 14 01:10:50.892623 kubelet[2666]: E0414 01:10:50.892591 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:50.892650 kubelet[2666]: E0414 01:10:50.892624 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:50.894393 kubelet[2666]: E0414 01:10:50.892272 2666 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Apr 14 01:10:50.894393 kubelet[2666]: E0414 01:10:50.892836 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:50.957750 kubelet[2666]: I0414 01:10:50.957615 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.957593916 podStartE2EDuration="2.957593916s" podCreationTimestamp="2026-04-14 01:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:10:50.937141301 +0000 UTC m=+1.182019579" watchObservedRunningTime="2026-04-14 01:10:50.957593916 +0000 UTC m=+1.202472202"
Apr 14 01:10:50.967037 kubelet[2666]: I0414 01:10:50.966900 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.966886706 podStartE2EDuration="1.966886706s" podCreationTimestamp="2026-04-14 01:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:10:50.957886613 +0000 UTC m=+1.202764891" watchObservedRunningTime="2026-04-14 01:10:50.966886706 +0000 UTC m=+1.211764993"
Apr 14 01:10:50.976492 kubelet[2666]: I0414 01:10:50.976396 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.976382916 podStartE2EDuration="1.976382916s" podCreationTimestamp="2026-04-14 01:10:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:10:50.967142267 +0000 UTC m=+1.212020545" watchObservedRunningTime="2026-04-14 01:10:50.976382916 +0000 UTC m=+1.221261202"
Apr 14 01:10:51.885279 kubelet[2666]: E0414 01:10:51.885208 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:51.885279 kubelet[2666]: E0414 01:10:51.885218 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:51.885659 kubelet[2666]: E0414 01:10:51.885519 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:52.887779 kubelet[2666]: E0414 01:10:52.887642 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:55.572374 kubelet[2666]: I0414 01:10:55.572166 2666 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Apr 14 01:10:55.573261 containerd[1584]: time="2026-04-14T01:10:55.573034311Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Apr 14 01:10:55.573646 kubelet[2666]: I0414 01:10:55.573417 2666 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Apr 14 01:10:56.402591 kubelet[2666]: I0414 01:10:56.402457 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/31cd886c-7058-4347-b048-49d2f2455f29-kube-proxy\") pod \"kube-proxy-2p7mr\" (UID: \"31cd886c-7058-4347-b048-49d2f2455f29\") " pod="kube-system/kube-proxy-2p7mr"
Apr 14 01:10:56.402591 kubelet[2666]: I0414 01:10:56.402609 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31cd886c-7058-4347-b048-49d2f2455f29-lib-modules\") pod \"kube-proxy-2p7mr\" (UID: \"31cd886c-7058-4347-b048-49d2f2455f29\") " pod="kube-system/kube-proxy-2p7mr"
Apr 14 01:10:56.402961 kubelet[2666]: I0414 01:10:56.402648 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31cd886c-7058-4347-b048-49d2f2455f29-xtables-lock\") pod \"kube-proxy-2p7mr\" (UID: \"31cd886c-7058-4347-b048-49d2f2455f29\") " pod="kube-system/kube-proxy-2p7mr"
Apr 14 01:10:56.402961 kubelet[2666]: I0414 01:10:56.402664 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjvq\" (UniqueName: \"kubernetes.io/projected/31cd886c-7058-4347-b048-49d2f2455f29-kube-api-access-5xjvq\") pod \"kube-proxy-2p7mr\" (UID: \"31cd886c-7058-4347-b048-49d2f2455f29\") " pod="kube-system/kube-proxy-2p7mr"
Apr 14 01:10:56.510619 kubelet[2666]: E0414 01:10:56.510041 2666 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Apr 14 01:10:56.510619 kubelet[2666]: E0414 01:10:56.510064 2666 projected.go:194] Error preparing data for projected volume kube-api-access-5xjvq for pod kube-system/kube-proxy-2p7mr: configmap "kube-root-ca.crt" not found
Apr 14 01:10:56.510619 kubelet[2666]: E0414 01:10:56.510146 2666 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/31cd886c-7058-4347-b048-49d2f2455f29-kube-api-access-5xjvq podName:31cd886c-7058-4347-b048-49d2f2455f29 nodeName:}" failed. No retries permitted until 2026-04-14 01:10:57.01012468 +0000 UTC m=+7.255002974 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5xjvq" (UniqueName: "kubernetes.io/projected/31cd886c-7058-4347-b048-49d2f2455f29-kube-api-access-5xjvq") pod "kube-proxy-2p7mr" (UID: "31cd886c-7058-4347-b048-49d2f2455f29") : configmap "kube-root-ca.crt" not found
Apr 14 01:10:56.808482 kubelet[2666]: I0414 01:10:56.808224 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/182e1dc2-b072-43f5-ba9b-46c87653953e-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-z7nz8\" (UID: \"182e1dc2-b072-43f5-ba9b-46c87653953e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-z7nz8"
Apr 14 01:10:56.809094 kubelet[2666]: I0414 01:10:56.808523 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phdq4\" (UniqueName: \"kubernetes.io/projected/182e1dc2-b072-43f5-ba9b-46c87653953e-kube-api-access-phdq4\") pod \"tigera-operator-6bf85f8dd-z7nz8\" (UID: \"182e1dc2-b072-43f5-ba9b-46c87653953e\") " pod="tigera-operator/tigera-operator-6bf85f8dd-z7nz8"
Apr 14 01:10:57.074277 containerd[1584]: time="2026-04-14T01:10:57.074066257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-z7nz8,Uid:182e1dc2-b072-43f5-ba9b-46c87653953e,Namespace:tigera-operator,Attempt:0,}"
Apr 14 01:10:57.104822 containerd[1584]: time="2026-04-14T01:10:57.103505451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:10:57.104984 containerd[1584]: time="2026-04-14T01:10:57.104717028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:10:57.104984 containerd[1584]: time="2026-04-14T01:10:57.104731784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:57.104984 containerd[1584]: time="2026-04-14T01:10:57.104814005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:57.152928 containerd[1584]: time="2026-04-14T01:10:57.152843761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-z7nz8,Uid:182e1dc2-b072-43f5-ba9b-46c87653953e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5a8cdf3ff46b036a039609c69f43fd5814bd0125ca5a42f0750172581f68525e\""
Apr 14 01:10:57.154713 containerd[1584]: time="2026-04-14T01:10:57.154448095Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Apr 14 01:10:57.267050 kubelet[2666]: E0414 01:10:57.266921 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:57.267789 containerd[1584]: time="2026-04-14T01:10:57.267668269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2p7mr,Uid:31cd886c-7058-4347-b048-49d2f2455f29,Namespace:kube-system,Attempt:0,}"
Apr 14 01:10:57.288708 containerd[1584]: time="2026-04-14T01:10:57.288610568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:10:57.288708 containerd[1584]: time="2026-04-14T01:10:57.288678243Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:10:57.288708 containerd[1584]: time="2026-04-14T01:10:57.288694013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:57.288892 containerd[1584]: time="2026-04-14T01:10:57.288753064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:10:57.331292 containerd[1584]: time="2026-04-14T01:10:57.331113844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2p7mr,Uid:31cd886c-7058-4347-b048-49d2f2455f29,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad2baf167706ed58ce87ce67a63e3ae363c050fd1bff273cbe32d790905d262d\""
Apr 14 01:10:57.332042 kubelet[2666]: E0414 01:10:57.331987 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:57.339891 containerd[1584]: time="2026-04-14T01:10:57.339756355Z" level=info msg="CreateContainer within sandbox \"ad2baf167706ed58ce87ce67a63e3ae363c050fd1bff273cbe32d790905d262d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Apr 14 01:10:57.355236 containerd[1584]: time="2026-04-14T01:10:57.355086463Z" level=info msg="CreateContainer within sandbox \"ad2baf167706ed58ce87ce67a63e3ae363c050fd1bff273cbe32d790905d262d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fce3e8a4e7158fa4c912f2f6b6c316f53c01fb0fee4defb09548af56dc036e8d\""
Apr 14 01:10:57.355915 containerd[1584]: time="2026-04-14T01:10:57.355849366Z" level=info msg="StartContainer for
\"fce3e8a4e7158fa4c912f2f6b6c316f53c01fb0fee4defb09548af56dc036e8d\"" Apr 14 01:10:57.417864 containerd[1584]: time="2026-04-14T01:10:57.417745745Z" level=info msg="StartContainer for \"fce3e8a4e7158fa4c912f2f6b6c316f53c01fb0fee4defb09548af56dc036e8d\" returns successfully" Apr 14 01:10:57.806499 kubelet[2666]: E0414 01:10:57.806407 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:57.901624 kubelet[2666]: E0414 01:10:57.901524 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:57.902466 kubelet[2666]: E0414 01:10:57.902399 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:57.911110 kubelet[2666]: I0414 01:10:57.910626 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2p7mr" podStartSLOduration=1.910559831 podStartE2EDuration="1.910559831s" podCreationTimestamp="2026-04-14 01:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:10:57.91029074 +0000 UTC m=+8.155169025" watchObservedRunningTime="2026-04-14 01:10:57.910559831 +0000 UTC m=+8.155438124" Apr 14 01:10:57.980189 kubelet[2666]: E0414 01:10:57.979632 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:10:58.505613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2770187430.mount: Deactivated successfully. 
Apr 14 01:10:58.903693 kubelet[2666]: E0414 01:10:58.903557 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:10:59.161209 containerd[1584]: time="2026-04-14T01:10:59.161031352Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:59.161803 containerd[1584]: time="2026-04-14T01:10:59.161767073Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 14 01:10:59.163303 containerd[1584]: time="2026-04-14T01:10:59.163220531Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:59.165406 containerd[1584]: time="2026-04-14T01:10:59.165370921Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:10:59.166290 containerd[1584]: time="2026-04-14T01:10:59.166133409Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.011659672s"
Apr 14 01:10:59.166290 containerd[1584]: time="2026-04-14T01:10:59.166201167Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 14 01:10:59.170831 containerd[1584]: time="2026-04-14T01:10:59.170634755Z" level=info msg="CreateContainer within sandbox \"5a8cdf3ff46b036a039609c69f43fd5814bd0125ca5a42f0750172581f68525e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 14 01:10:59.183561 containerd[1584]: time="2026-04-14T01:10:59.183451494Z" level=info msg="CreateContainer within sandbox \"5a8cdf3ff46b036a039609c69f43fd5814bd0125ca5a42f0750172581f68525e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dd6f7dfb345bc120ce302396a69d5f1bbc1065d2b268d5bc3eae670fdf09e427\""
Apr 14 01:10:59.184287 containerd[1584]: time="2026-04-14T01:10:59.184267060Z" level=info msg="StartContainer for \"dd6f7dfb345bc120ce302396a69d5f1bbc1065d2b268d5bc3eae670fdf09e427\""
Apr 14 01:10:59.271737 containerd[1584]: time="2026-04-14T01:10:59.271257072Z" level=info msg="StartContainer for \"dd6f7dfb345bc120ce302396a69d5f1bbc1065d2b268d5bc3eae670fdf09e427\" returns successfully"
Apr 14 01:11:02.176516 kubelet[2666]: E0414 01:11:02.175905 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:02.236002 kubelet[2666]: I0414 01:11:02.235871 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-z7nz8" podStartSLOduration=4.222976897 podStartE2EDuration="6.235858359s" podCreationTimestamp="2026-04-14 01:10:56 +0000 UTC" firstStartedPulling="2026-04-14 01:10:57.154129592 +0000 UTC m=+7.399007870" lastFinishedPulling="2026-04-14 01:10:59.167011053 +0000 UTC m=+9.411889332" observedRunningTime="2026-04-14 01:10:59.929057817 +0000 UTC m=+10.173936106" watchObservedRunningTime="2026-04-14 01:11:02.235858359 +0000 UTC m=+12.480736649"
Apr 14 01:11:02.918114 kubelet[2666]: E0414 01:11:02.917952 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:04.805639 sudo[1770]: pam_unix(sudo:session): session closed for user root
Apr 14 01:11:04.807918 sshd[1763]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:04.813753 systemd[1]: sshd@6-10.0.0.8:22-10.0.0.1:59616.service: Deactivated successfully.
Apr 14 01:11:04.828937 systemd[1]: session-7.scope: Deactivated successfully.
Apr 14 01:11:04.829877 systemd-logind[1568]: Session 7 logged out. Waiting for processes to exit.
Apr 14 01:11:04.834412 systemd-logind[1568]: Removed session 7.
Apr 14 01:11:06.781561 kubelet[2666]: E0414 01:11:06.781388 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739"
Apr 14 01:11:06.784008 kubelet[2666]: I0414 01:11:06.783960 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-cni-log-dir\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.784461 kubelet[2666]: I0414 01:11:06.784409 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-policysync\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.784857 kubelet[2666]: I0414 01:11:06.784773 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqk9b\" (UniqueName: \"kubernetes.io/projected/8c902887-1692-4ac6-9f66-4c09d78225a1-kube-api-access-jqk9b\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.784857 kubelet[2666]: I0414 01:11:06.784813 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55741b32-cccc-4ddc-ba76-151d49a0a17b-tigera-ca-bundle\") pod \"calico-typha-744f4bc6c5-zfnmn\" (UID: \"55741b32-cccc-4ddc-ba76-151d49a0a17b\") " pod="calico-system/calico-typha-744f4bc6c5-zfnmn"
Apr 14 01:11:06.785373 kubelet[2666]: I0414 01:11:06.784939 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9h5g\" (UniqueName: \"kubernetes.io/projected/55741b32-cccc-4ddc-ba76-151d49a0a17b-kube-api-access-t9h5g\") pod \"calico-typha-744f4bc6c5-zfnmn\" (UID: \"55741b32-cccc-4ddc-ba76-151d49a0a17b\") " pod="calico-system/calico-typha-744f4bc6c5-zfnmn"
Apr 14 01:11:06.785373 kubelet[2666]: I0414 01:11:06.784964 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-cni-bin-dir\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785373 kubelet[2666]: I0414 01:11:06.784982 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-nodeproc\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785373 kubelet[2666]: I0414 01:11:06.785001 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c902887-1692-4ac6-9f66-4c09d78225a1-tigera-ca-bundle\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785373 kubelet[2666]: I0414 01:11:06.785014 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-cni-net-dir\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785784 kubelet[2666]: I0414 01:11:06.785026 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-flexvol-driver-host\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785784 kubelet[2666]: I0414 01:11:06.785037 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-sys-fs\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785784 kubelet[2666]: I0414 01:11:06.785049 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-xtables-lock\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785784 kubelet[2666]: I0414 01:11:06.785063 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/55741b32-cccc-4ddc-ba76-151d49a0a17b-typha-certs\") pod \"calico-typha-744f4bc6c5-zfnmn\" (UID: \"55741b32-cccc-4ddc-ba76-151d49a0a17b\") " pod="calico-system/calico-typha-744f4bc6c5-zfnmn"
Apr 14 01:11:06.785784 kubelet[2666]: I0414 01:11:06.785114 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-bpffs\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785865 kubelet[2666]: I0414 01:11:06.785129 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8c902887-1692-4ac6-9f66-4c09d78225a1-node-certs\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785865 kubelet[2666]: I0414 01:11:06.785164 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-var-run-calico\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785865 kubelet[2666]: I0414 01:11:06.785270 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-var-lib-calico\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.785865 kubelet[2666]: I0414 01:11:06.785307 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8c902887-1692-4ac6-9f66-4c09d78225a1-lib-modules\") pod \"calico-node-dlffs\" (UID: \"8c902887-1692-4ac6-9f66-4c09d78225a1\") " pod="calico-system/calico-node-dlffs"
Apr 14 01:11:06.886605 kubelet[2666]: I0414 01:11:06.886022 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/241647e8-70b4-4fa4-aa60-9aba8555b739-registration-dir\") pod \"csi-node-driver-5mzw4\" (UID: \"241647e8-70b4-4fa4-aa60-9aba8555b739\") " pod="calico-system/csi-node-driver-5mzw4"
Apr 14 01:11:06.886605 kubelet[2666]: I0414 01:11:06.886418 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/241647e8-70b4-4fa4-aa60-9aba8555b739-varrun\") pod \"csi-node-driver-5mzw4\" (UID: \"241647e8-70b4-4fa4-aa60-9aba8555b739\") " pod="calico-system/csi-node-driver-5mzw4"
Apr 14 01:11:06.887215 kubelet[2666]: I0414 01:11:06.886557 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd5gk\" (UniqueName: \"kubernetes.io/projected/241647e8-70b4-4fa4-aa60-9aba8555b739-kube-api-access-wd5gk\") pod \"csi-node-driver-5mzw4\" (UID: \"241647e8-70b4-4fa4-aa60-9aba8555b739\") " pod="calico-system/csi-node-driver-5mzw4"
Apr 14 01:11:06.888414 kubelet[2666]: I0414 01:11:06.887270 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/241647e8-70b4-4fa4-aa60-9aba8555b739-socket-dir\") pod \"csi-node-driver-5mzw4\" (UID: \"241647e8-70b4-4fa4-aa60-9aba8555b739\") " pod="calico-system/csi-node-driver-5mzw4"
Apr 14 01:11:06.888414 kubelet[2666]: I0414 01:11:06.887481 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/241647e8-70b4-4fa4-aa60-9aba8555b739-kubelet-dir\") pod \"csi-node-driver-5mzw4\" (UID: \"241647e8-70b4-4fa4-aa60-9aba8555b739\") " pod="calico-system/csi-node-driver-5mzw4"
Apr 14 01:11:06.892906 kubelet[2666]: E0414 01:11:06.892663 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.892906 kubelet[2666]: W0414 01:11:06.892683 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.892906 kubelet[2666]: E0414 01:11:06.892712 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.895429 kubelet[2666]: E0414 01:11:06.895241 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.895429 kubelet[2666]: W0414 01:11:06.895265 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.895429 kubelet[2666]: E0414 01:11:06.895283 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.901434 kubelet[2666]: E0414 01:11:06.901403 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.901434 kubelet[2666]: W0414 01:11:06.901434 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.901574 kubelet[2666]: E0414 01:11:06.901452 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.904706 kubelet[2666]: E0414 01:11:06.904684 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.904706 kubelet[2666]: W0414 01:11:06.904704 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.904782 kubelet[2666]: E0414 01:11:06.904716 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.937437 kubelet[2666]: E0414 01:11:06.937162 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:06.938591 containerd[1584]: time="2026-04-14T01:11:06.938454564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-744f4bc6c5-zfnmn,Uid:55741b32-cccc-4ddc-ba76-151d49a0a17b,Namespace:calico-system,Attempt:0,}"
Apr 14 01:11:06.978587 containerd[1584]: time="2026-04-14T01:11:06.978417128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dlffs,Uid:8c902887-1692-4ac6-9f66-4c09d78225a1,Namespace:calico-system,Attempt:0,}"
Apr 14 01:11:06.982220 containerd[1584]: time="2026-04-14T01:11:06.982152397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:11:06.982385 containerd[1584]: time="2026-04-14T01:11:06.982234656Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:11:06.982385 containerd[1584]: time="2026-04-14T01:11:06.982256227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:06.982549 containerd[1584]: time="2026-04-14T01:11:06.982414023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:06.989409 kubelet[2666]: E0414 01:11:06.988705 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.989409 kubelet[2666]: W0414 01:11:06.988723 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.989409 kubelet[2666]: E0414 01:11:06.988744 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.989409 kubelet[2666]: E0414 01:11:06.988964 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.989409 kubelet[2666]: W0414 01:11:06.988971 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.989409 kubelet[2666]: E0414 01:11:06.988980 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.989409 kubelet[2666]: E0414 01:11:06.989308 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.989409 kubelet[2666]: W0414 01:11:06.989350 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.989409 kubelet[2666]: E0414 01:11:06.989386 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.989778 kubelet[2666]: E0414 01:11:06.989707 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.989778 kubelet[2666]: W0414 01:11:06.989713 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.989778 kubelet[2666]: E0414 01:11:06.989720 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.991799 kubelet[2666]: E0414 01:11:06.990456 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.991799 kubelet[2666]: W0414 01:11:06.990467 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.991799 kubelet[2666]: E0414 01:11:06.990483 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.991799 kubelet[2666]: E0414 01:11:06.990770 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.991799 kubelet[2666]: W0414 01:11:06.990775 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.991799 kubelet[2666]: E0414 01:11:06.990782 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.991799 kubelet[2666]: E0414 01:11:06.991032 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.991799 kubelet[2666]: W0414 01:11:06.991037 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.991799 kubelet[2666]: E0414 01:11:06.991042 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.991799 kubelet[2666]: E0414 01:11:06.991292 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.992893 kubelet[2666]: W0414 01:11:06.991297 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.992893 kubelet[2666]: E0414 01:11:06.991303 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.992893 kubelet[2666]: E0414 01:11:06.991575 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.992893 kubelet[2666]: W0414 01:11:06.991581 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.992893 kubelet[2666]: E0414 01:11:06.991587 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.992893 kubelet[2666]: E0414 01:11:06.991876 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.992893 kubelet[2666]: W0414 01:11:06.991882 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.992893 kubelet[2666]: E0414 01:11:06.991888 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.992893 kubelet[2666]: E0414 01:11:06.992180 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.992893 kubelet[2666]: W0414 01:11:06.992185 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.994083 kubelet[2666]: E0414 01:11:06.992191 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.994083 kubelet[2666]: E0414 01:11:06.992571 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.994083 kubelet[2666]: W0414 01:11:06.992578 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.994083 kubelet[2666]: E0414 01:11:06.992586 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.994083 kubelet[2666]: E0414 01:11:06.992986 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.994083 kubelet[2666]: W0414 01:11:06.992992 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.994083 kubelet[2666]: E0414 01:11:06.992999 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.994083 kubelet[2666]: E0414 01:11:06.993700 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.994083 kubelet[2666]: W0414 01:11:06.993709 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.994083 kubelet[2666]: E0414 01:11:06.993717 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.996126 kubelet[2666]: E0414 01:11:06.993983 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.996126 kubelet[2666]: W0414 01:11:06.993990 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.996126 kubelet[2666]: E0414 01:11:06.993996 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.996126 kubelet[2666]: E0414 01:11:06.996092 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.996126 kubelet[2666]: W0414 01:11:06.996109 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.996472 kubelet[2666]: E0414 01:11:06.996125 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.997911 kubelet[2666]: E0414 01:11:06.997582 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.997911 kubelet[2666]: W0414 01:11:06.997647 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.997911 kubelet[2666]: E0414 01:11:06.997667 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 14 01:11:06.998443 kubelet[2666]: E0414 01:11:06.998410 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 14 01:11:06.998443 kubelet[2666]: W0414 01:11:06.998441 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 14 01:11:06.998443 kubelet[2666]: E0414 01:11:06.998456 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:06.998729 kubelet[2666]: E0414 01:11:06.998698 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:06.998729 kubelet[2666]: W0414 01:11:06.998711 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:06.998729 kubelet[2666]: E0414 01:11:06.998723 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:06.998990 kubelet[2666]: E0414 01:11:06.998943 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:06.998990 kubelet[2666]: W0414 01:11:06.998962 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:06.998990 kubelet[2666]: E0414 01:11:06.998972 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:06.999286 kubelet[2666]: E0414 01:11:06.999241 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:06.999286 kubelet[2666]: W0414 01:11:06.999261 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:06.999286 kubelet[2666]: E0414 01:11:06.999270 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:07.000126 kubelet[2666]: E0414 01:11:06.999999 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:07.000126 kubelet[2666]: W0414 01:11:07.000094 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:07.000126 kubelet[2666]: E0414 01:11:07.000123 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:07.000658 kubelet[2666]: E0414 01:11:07.000611 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:07.000658 kubelet[2666]: W0414 01:11:07.000639 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:07.000658 kubelet[2666]: E0414 01:11:07.000651 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:07.001511 kubelet[2666]: E0414 01:11:07.001476 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:07.001511 kubelet[2666]: W0414 01:11:07.001503 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:07.001596 kubelet[2666]: E0414 01:11:07.001514 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:07.003198 kubelet[2666]: E0414 01:11:07.002034 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:07.003198 kubelet[2666]: W0414 01:11:07.002046 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:07.003198 kubelet[2666]: E0414 01:11:07.002057 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:07.017652 kubelet[2666]: E0414 01:11:07.017235 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:07.017652 kubelet[2666]: W0414 01:11:07.017277 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:07.017652 kubelet[2666]: E0414 01:11:07.017308 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:07.027642 containerd[1584]: time="2026-04-14T01:11:07.027503059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:11:07.027642 containerd[1584]: time="2026-04-14T01:11:07.027583824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:11:07.027642 containerd[1584]: time="2026-04-14T01:11:07.027602310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:07.028034 containerd[1584]: time="2026-04-14T01:11:07.027710688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:07.055873 containerd[1584]: time="2026-04-14T01:11:07.055711110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-744f4bc6c5-zfnmn,Uid:55741b32-cccc-4ddc-ba76-151d49a0a17b,Namespace:calico-system,Attempt:0,} returns sandbox id \"21b52ef75bde127870459391ac9584068655db486bfaa6a62e34c300df963d02\"" Apr 14 01:11:07.059452 kubelet[2666]: E0414 01:11:07.059394 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:07.060715 containerd[1584]: time="2026-04-14T01:11:07.060682802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 14 01:11:07.074625 containerd[1584]: time="2026-04-14T01:11:07.074580185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dlffs,Uid:8c902887-1692-4ac6-9f66-4c09d78225a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"718ee6f5981a8af642b0456e0d122e1fdfc12aaa6bae2326d70f6758aa7cd14d\"" Apr 14 01:11:08.734077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3980165387.mount: Deactivated successfully. 
Apr 14 01:11:08.868580 kubelet[2666]: E0414 01:11:08.868211 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:09.517263 containerd[1584]: time="2026-04-14T01:11:09.517113537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:09.518010 containerd[1584]: time="2026-04-14T01:11:09.517960940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 14 01:11:09.519212 containerd[1584]: time="2026-04-14T01:11:09.519173104Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:09.523697 containerd[1584]: time="2026-04-14T01:11:09.523460791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:09.524603 containerd[1584]: time="2026-04-14T01:11:09.524534635Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.46381748s" Apr 14 01:11:09.524603 containerd[1584]: time="2026-04-14T01:11:09.524592632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 14 01:11:09.525798 containerd[1584]: time="2026-04-14T01:11:09.525744628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 14 01:11:09.539497 containerd[1584]: time="2026-04-14T01:11:09.539456822Z" level=info msg="CreateContainer within sandbox \"21b52ef75bde127870459391ac9584068655db486bfaa6a62e34c300df963d02\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 14 01:11:09.555222 containerd[1584]: time="2026-04-14T01:11:09.555164332Z" level=info msg="CreateContainer within sandbox \"21b52ef75bde127870459391ac9584068655db486bfaa6a62e34c300df963d02\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ecec18fe4c812024ae7c779b031db93da94d9bec1352ba8bd21595ba2a793afb\"" Apr 14 01:11:09.557066 containerd[1584]: time="2026-04-14T01:11:09.556987320Z" level=info msg="StartContainer for \"ecec18fe4c812024ae7c779b031db93da94d9bec1352ba8bd21595ba2a793afb\"" Apr 14 01:11:09.663234 containerd[1584]: time="2026-04-14T01:11:09.662996549Z" level=info msg="StartContainer for \"ecec18fe4c812024ae7c779b031db93da94d9bec1352ba8bd21595ba2a793afb\" returns successfully" Apr 14 01:11:09.688457 update_engine[1570]: I20260414 01:11:09.687882 1570 update_attempter.cc:509] Updating boot flags... 
Apr 14 01:11:09.732578 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3252) Apr 14 01:11:09.825872 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 31 scanned by (udev-worker) (3251) Apr 14 01:11:09.941378 kubelet[2666]: E0414 01:11:09.940589 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:09.950469 kubelet[2666]: I0414 01:11:09.949707 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-744f4bc6c5-zfnmn" podStartSLOduration=1.484246268 podStartE2EDuration="3.949690949s" podCreationTimestamp="2026-04-14 01:11:06 +0000 UTC" firstStartedPulling="2026-04-14 01:11:07.060171333 +0000 UTC m=+17.305049612" lastFinishedPulling="2026-04-14 01:11:09.525616005 +0000 UTC m=+19.770494293" observedRunningTime="2026-04-14 01:11:09.949382283 +0000 UTC m=+20.194260579" watchObservedRunningTime="2026-04-14 01:11:09.949690949 +0000 UTC m=+20.194569239" Apr 14 01:11:10.012197 kubelet[2666]: E0414 01:11:10.012131 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.012197 kubelet[2666]: W0414 01:11:10.012167 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.013193 kubelet[2666]: E0414 01:11:10.013105 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.013635 kubelet[2666]: E0414 01:11:10.013592 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.013635 kubelet[2666]: W0414 01:11:10.013619 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.013689 kubelet[2666]: E0414 01:11:10.013636 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.014065 kubelet[2666]: E0414 01:11:10.014045 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.014098 kubelet[2666]: W0414 01:11:10.014066 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.014098 kubelet[2666]: E0414 01:11:10.014075 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.014474 kubelet[2666]: E0414 01:11:10.014455 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.014499 kubelet[2666]: W0414 01:11:10.014477 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.014499 kubelet[2666]: E0414 01:11:10.014488 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.014776 kubelet[2666]: E0414 01:11:10.014759 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.014800 kubelet[2666]: W0414 01:11:10.014779 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.014800 kubelet[2666]: E0414 01:11:10.014787 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.015012 kubelet[2666]: E0414 01:11:10.014992 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.015039 kubelet[2666]: W0414 01:11:10.015012 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.015039 kubelet[2666]: E0414 01:11:10.015022 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.015274 kubelet[2666]: E0414 01:11:10.015255 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.015297 kubelet[2666]: W0414 01:11:10.015275 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.015297 kubelet[2666]: E0414 01:11:10.015283 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.015530 kubelet[2666]: E0414 01:11:10.015512 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.015555 kubelet[2666]: W0414 01:11:10.015531 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.015555 kubelet[2666]: E0414 01:11:10.015539 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.015807 kubelet[2666]: E0414 01:11:10.015771 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.015807 kubelet[2666]: W0414 01:11:10.015794 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.015807 kubelet[2666]: E0414 01:11:10.015801 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.016035 kubelet[2666]: E0414 01:11:10.016003 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.016035 kubelet[2666]: W0414 01:11:10.016026 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.016072 kubelet[2666]: E0414 01:11:10.016043 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.016289 kubelet[2666]: E0414 01:11:10.016256 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.016289 kubelet[2666]: W0414 01:11:10.016280 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.016356 kubelet[2666]: E0414 01:11:10.016288 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.016535 kubelet[2666]: E0414 01:11:10.016517 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.016558 kubelet[2666]: W0414 01:11:10.016536 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.016558 kubelet[2666]: E0414 01:11:10.016544 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.016866 kubelet[2666]: E0414 01:11:10.016831 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.016866 kubelet[2666]: W0414 01:11:10.016855 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.016866 kubelet[2666]: E0414 01:11:10.016863 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.017091 kubelet[2666]: E0414 01:11:10.017059 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.017091 kubelet[2666]: W0414 01:11:10.017082 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.017091 kubelet[2666]: E0414 01:11:10.017089 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.017359 kubelet[2666]: E0414 01:11:10.017296 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.017359 kubelet[2666]: W0414 01:11:10.017347 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.017359 kubelet[2666]: E0414 01:11:10.017356 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.017702 kubelet[2666]: E0414 01:11:10.017668 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.017702 kubelet[2666]: W0414 01:11:10.017692 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.017734 kubelet[2666]: E0414 01:11:10.017701 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.018016 kubelet[2666]: E0414 01:11:10.017983 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.018016 kubelet[2666]: W0414 01:11:10.018005 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.018016 kubelet[2666]: E0414 01:11:10.018013 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.018372 kubelet[2666]: E0414 01:11:10.018347 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.018372 kubelet[2666]: W0414 01:11:10.018367 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.018372 kubelet[2666]: E0414 01:11:10.018376 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.018699 kubelet[2666]: E0414 01:11:10.018674 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.018699 kubelet[2666]: W0414 01:11:10.018693 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.018767 kubelet[2666]: E0414 01:11:10.018702 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.018960 kubelet[2666]: E0414 01:11:10.018938 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.018960 kubelet[2666]: W0414 01:11:10.018958 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.019013 kubelet[2666]: E0414 01:11:10.018968 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.019188 kubelet[2666]: E0414 01:11:10.019170 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.019212 kubelet[2666]: W0414 01:11:10.019188 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.019212 kubelet[2666]: E0414 01:11:10.019209 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.019516 kubelet[2666]: E0414 01:11:10.019498 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.019539 kubelet[2666]: W0414 01:11:10.019517 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.019539 kubelet[2666]: E0414 01:11:10.019526 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 14 01:11:10.019756 kubelet[2666]: E0414 01:11:10.019735 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.019756 kubelet[2666]: W0414 01:11:10.019754 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.019794 kubelet[2666]: E0414 01:11:10.019762 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 14 01:11:10.020017 kubelet[2666]: E0414 01:11:10.019997 2666 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 14 01:11:10.020036 kubelet[2666]: W0414 01:11:10.020016 2666 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 14 01:11:10.020036 kubelet[2666]: E0414 01:11:10.020024 2666 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Apr 14 01:11:10.868218 kubelet[2666]: E0414 01:11:10.868068 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:10.942485 kubelet[2666]: I0414 01:11:10.942367 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 01:11:10.943059 kubelet[2666]: E0414 01:11:10.942914 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Apr 14 01:11:11.275716 containerd[1584]: time="2026-04-14T01:11:11.275477210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:11.276766 containerd[1584]: time="2026-04-14T01:11:11.276721640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 14 01:11:11.277884 containerd[1584]: time="2026-04-14T01:11:11.277783792Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:11.280231 containerd[1584]: time="2026-04-14T01:11:11.279973715Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:11.281400 containerd[1584]: time="2026-04-14T01:11:11.281251894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.755461904s" Apr 14 01:11:11.281400 containerd[1584]: time="2026-04-14T01:11:11.281387711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 14 01:11:11.290717 containerd[1584]: time="2026-04-14T01:11:11.290604291Z" level=info msg="CreateContainer within sandbox \"718ee6f5981a8af642b0456e0d122e1fdfc12aaa6bae2326d70f6758aa7cd14d\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 14 01:11:11.308758 containerd[1584]: time="2026-04-14T01:11:11.308537078Z" level=info msg="CreateContainer within sandbox \"718ee6f5981a8af642b0456e0d122e1fdfc12aaa6bae2326d70f6758aa7cd14d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1c8fd97b8bd0aea513d1194468a09dfdf0f75500e8c731b06f1e3ce7f0c14368\"" Apr 14 01:11:11.311371 containerd[1584]: time="2026-04-14T01:11:11.309878671Z" level=info msg="StartContainer for \"1c8fd97b8bd0aea513d1194468a09dfdf0f75500e8c731b06f1e3ce7f0c14368\"" Apr 14 01:11:11.373559 containerd[1584]: time="2026-04-14T01:11:11.373505354Z" level=info msg="StartContainer for \"1c8fd97b8bd0aea513d1194468a09dfdf0f75500e8c731b06f1e3ce7f0c14368\" returns successfully" Apr 14 01:11:11.415560 containerd[1584]: time="2026-04-14T01:11:11.414197735Z" level=info msg="shim disconnected" id=1c8fd97b8bd0aea513d1194468a09dfdf0f75500e8c731b06f1e3ce7f0c14368 namespace=k8s.io Apr 14 01:11:11.415560 containerd[1584]: time="2026-04-14T01:11:11.415557742Z" level=warning msg="cleaning up after shim disconnected" id=1c8fd97b8bd0aea513d1194468a09dfdf0f75500e8c731b06f1e3ce7f0c14368 namespace=k8s.io Apr 14 01:11:11.415560 containerd[1584]: time="2026-04-14T01:11:11.415568937Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 01:11:11.532359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c8fd97b8bd0aea513d1194468a09dfdf0f75500e8c731b06f1e3ce7f0c14368-rootfs.mount: Deactivated successfully. 
Apr 14 01:11:11.949902 containerd[1584]: time="2026-04-14T01:11:11.949691108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 14 01:11:12.868373 kubelet[2666]: E0414 01:11:12.868151 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:13.735376 kubelet[2666]: I0414 01:11:13.735233 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 01:11:13.735792 kubelet[2666]: E0414 01:11:13.735704 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:13.957886 kubelet[2666]: E0414 01:11:13.957812 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:14.868146 kubelet[2666]: E0414 01:11:14.868074 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:15.257767 kernel: hrtimer: interrupt took 5210859 ns Apr 14 01:11:16.868264 kubelet[2666]: E0414 01:11:16.868170 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:18.868736 
kubelet[2666]: E0414 01:11:18.867774 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:20.543714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2400146920.mount: Deactivated successfully. Apr 14 01:11:20.757229 containerd[1584]: time="2026-04-14T01:11:20.757104180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:20.757930 containerd[1584]: time="2026-04-14T01:11:20.757869249Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 14 01:11:20.758640 containerd[1584]: time="2026-04-14T01:11:20.758602309Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:20.760437 containerd[1584]: time="2026-04-14T01:11:20.760392374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:20.761014 containerd[1584]: time="2026-04-14T01:11:20.760874772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.810998322s" Apr 14 01:11:20.761206 containerd[1584]: time="2026-04-14T01:11:20.761128006Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 14 01:11:20.766375 containerd[1584]: time="2026-04-14T01:11:20.766294069Z" level=info msg="CreateContainer within sandbox \"718ee6f5981a8af642b0456e0d122e1fdfc12aaa6bae2326d70f6758aa7cd14d\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 14 01:11:20.786286 containerd[1584]: time="2026-04-14T01:11:20.786225705Z" level=info msg="CreateContainer within sandbox \"718ee6f5981a8af642b0456e0d122e1fdfc12aaa6bae2326d70f6758aa7cd14d\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"6707cca38ef4137164c8434c632cff89acf95350b4560edd8eb8855f41137045\"" Apr 14 01:11:20.790269 containerd[1584]: time="2026-04-14T01:11:20.790221769Z" level=info msg="StartContainer for \"6707cca38ef4137164c8434c632cff89acf95350b4560edd8eb8855f41137045\"" Apr 14 01:11:20.867887 kubelet[2666]: E0414 01:11:20.867687 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:20.916615 containerd[1584]: time="2026-04-14T01:11:20.916382062Z" level=info msg="StartContainer for \"6707cca38ef4137164c8434c632cff89acf95350b4560edd8eb8855f41137045\" returns successfully" Apr 14 01:11:20.981545 containerd[1584]: time="2026-04-14T01:11:20.981480674Z" level=info msg="shim disconnected" id=6707cca38ef4137164c8434c632cff89acf95350b4560edd8eb8855f41137045 namespace=k8s.io Apr 14 01:11:20.981755 containerd[1584]: time="2026-04-14T01:11:20.981722822Z" level=warning msg="cleaning up after shim disconnected" id=6707cca38ef4137164c8434c632cff89acf95350b4560edd8eb8855f41137045 namespace=k8s.io Apr 14 01:11:20.981755 containerd[1584]: 
time="2026-04-14T01:11:20.981754293Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 01:11:21.544405 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6707cca38ef4137164c8434c632cff89acf95350b4560edd8eb8855f41137045-rootfs.mount: Deactivated successfully. Apr 14 01:11:21.986038 containerd[1584]: time="2026-04-14T01:11:21.985916399Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 14 01:11:22.868512 kubelet[2666]: E0414 01:11:22.868167 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:24.869222 kubelet[2666]: E0414 01:11:24.868816 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:25.791365 containerd[1584]: time="2026-04-14T01:11:25.791229901Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:25.792260 containerd[1584]: time="2026-04-14T01:11:25.792184092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 14 01:11:25.793254 containerd[1584]: time="2026-04-14T01:11:25.793213332Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:25.795427 containerd[1584]: time="2026-04-14T01:11:25.795260950Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:25.795988 containerd[1584]: time="2026-04-14T01:11:25.795926650Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.809910043s" Apr 14 01:11:25.796048 containerd[1584]: time="2026-04-14T01:11:25.796022947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 14 01:11:25.801524 containerd[1584]: time="2026-04-14T01:11:25.801380917Z" level=info msg="CreateContainer within sandbox \"718ee6f5981a8af642b0456e0d122e1fdfc12aaa6bae2326d70f6758aa7cd14d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 14 01:11:25.854883 containerd[1584]: time="2026-04-14T01:11:25.854491511Z" level=info msg="CreateContainer within sandbox \"718ee6f5981a8af642b0456e0d122e1fdfc12aaa6bae2326d70f6758aa7cd14d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d4c7996e2d73f02232f2976077a6a5710e4e6273c2b63a42f67a8b6ef11c920b\"" Apr 14 01:11:25.863453 containerd[1584]: time="2026-04-14T01:11:25.856841558Z" level=info msg="StartContainer for \"d4c7996e2d73f02232f2976077a6a5710e4e6273c2b63a42f67a8b6ef11c920b\"" Apr 14 01:11:25.935233 systemd[1]: run-containerd-runc-k8s.io-d4c7996e2d73f02232f2976077a6a5710e4e6273c2b63a42f67a8b6ef11c920b-runc.Y37K9D.mount: Deactivated successfully. 
Apr 14 01:11:25.963927 containerd[1584]: time="2026-04-14T01:11:25.963599274Z" level=info msg="StartContainer for \"d4c7996e2d73f02232f2976077a6a5710e4e6273c2b63a42f67a8b6ef11c920b\" returns successfully" Apr 14 01:11:26.405149 systemd[1]: Started sshd@7-10.0.0.8:22-10.0.0.1:47266.service - OpenSSH per-connection server daemon (10.0.0.1:47266). Apr 14 01:11:26.456064 sshd[3527]: Accepted publickey for core from 10.0.0.1 port 47266 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU Apr 14 01:11:26.457401 sshd[3527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:11:26.461453 systemd-logind[1568]: New session 8 of user core. Apr 14 01:11:26.468724 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 14 01:11:26.564974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4c7996e2d73f02232f2976077a6a5710e4e6273c2b63a42f67a8b6ef11c920b-rootfs.mount: Deactivated successfully. Apr 14 01:11:26.567143 kubelet[2666]: I0414 01:11:26.567091 2666 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 14 01:11:26.567553 containerd[1584]: time="2026-04-14T01:11:26.567361448Z" level=info msg="shim disconnected" id=d4c7996e2d73f02232f2976077a6a5710e4e6273c2b63a42f67a8b6ef11c920b namespace=k8s.io Apr 14 01:11:26.567553 containerd[1584]: time="2026-04-14T01:11:26.567396645Z" level=warning msg="cleaning up after shim disconnected" id=d4c7996e2d73f02232f2976077a6a5710e4e6273c2b63a42f67a8b6ef11c920b namespace=k8s.io Apr 14 01:11:26.567553 containerd[1584]: time="2026-04-14T01:11:26.567403135Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 14 01:11:26.652152 sshd[3527]: pam_unix(sshd:session): session closed for user core Apr 14 01:11:26.655414 systemd[1]: sshd@7-10.0.0.8:22-10.0.0.1:47266.service: Deactivated successfully. Apr 14 01:11:26.659680 systemd[1]: session-8.scope: Deactivated successfully. Apr 14 01:11:26.661123 systemd-logind[1568]: Session 8 logged out. 
Waiting for processes to exit. Apr 14 01:11:26.662176 systemd-logind[1568]: Removed session 8. Apr 14 01:11:26.711822 kubelet[2666]: I0414 01:11:26.711513 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be5517b6-6d7c-4af7-8c09-ffaa013ba114-calico-apiserver-certs\") pod \"calico-apiserver-858fc86974-6fm4q\" (UID: \"be5517b6-6d7c-4af7-8c09-ffaa013ba114\") " pod="calico-system/calico-apiserver-858fc86974-6fm4q" Apr 14 01:11:26.711822 kubelet[2666]: I0414 01:11:26.711680 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw4k8\" (UniqueName: \"kubernetes.io/projected/d332dd42-09a8-4567-aca9-70ecff2b60fc-kube-api-access-qw4k8\") pod \"coredns-674b8bbfcf-t9cp8\" (UID: \"d332dd42-09a8-4567-aca9-70ecff2b60fc\") " pod="kube-system/coredns-674b8bbfcf-t9cp8" Apr 14 01:11:26.711822 kubelet[2666]: I0414 01:11:26.711694 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-nginx-config\") pod \"whisker-cf6f98489-m9nv8\" (UID: \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\") " pod="calico-system/whisker-cf6f98489-m9nv8" Apr 14 01:11:26.711822 kubelet[2666]: I0414 01:11:26.711731 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-whisker-backend-key-pair\") pod \"whisker-cf6f98489-m9nv8\" (UID: \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\") " pod="calico-system/whisker-cf6f98489-m9nv8" Apr 14 01:11:26.711822 kubelet[2666]: I0414 01:11:26.711769 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0e971146-3f82-4810-b7e0-9307354ac58e-tigera-ca-bundle\") pod \"calico-kube-controllers-69f7f46b8c-2nl2l\" (UID: \"0e971146-3f82-4810-b7e0-9307354ac58e\") " pod="calico-system/calico-kube-controllers-69f7f46b8c-2nl2l" Apr 14 01:11:26.712431 kubelet[2666]: I0414 01:11:26.711785 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-whisker-ca-bundle\") pod \"whisker-cf6f98489-m9nv8\" (UID: \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\") " pod="calico-system/whisker-cf6f98489-m9nv8" Apr 14 01:11:26.712431 kubelet[2666]: I0414 01:11:26.711802 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-825w8\" (UniqueName: \"kubernetes.io/projected/5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7-kube-api-access-825w8\") pod \"coredns-674b8bbfcf-pdxgp\" (UID: \"5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7\") " pod="kube-system/coredns-674b8bbfcf-pdxgp" Apr 14 01:11:26.712431 kubelet[2666]: I0414 01:11:26.711818 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zp8h\" (UniqueName: \"kubernetes.io/projected/8cfb8838-233c-4923-a3d1-211c57385c00-kube-api-access-4zp8h\") pod \"calico-apiserver-858fc86974-lgjnc\" (UID: \"8cfb8838-233c-4923-a3d1-211c57385c00\") " pod="calico-system/calico-apiserver-858fc86974-lgjnc" Apr 14 01:11:26.712431 kubelet[2666]: I0414 01:11:26.711831 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6df9bea-e3c6-4c2c-952c-aaf1341b5033-config\") pod \"goldmane-5b85766d88-gpjhg\" (UID: \"c6df9bea-e3c6-4c2c-952c-aaf1341b5033\") " pod="calico-system/goldmane-5b85766d88-gpjhg" Apr 14 01:11:26.712431 kubelet[2666]: I0414 01:11:26.711852 2666 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d332dd42-09a8-4567-aca9-70ecff2b60fc-config-volume\") pod \"coredns-674b8bbfcf-t9cp8\" (UID: \"d332dd42-09a8-4567-aca9-70ecff2b60fc\") " pod="kube-system/coredns-674b8bbfcf-t9cp8" Apr 14 01:11:26.712675 kubelet[2666]: I0414 01:11:26.711933 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c6df9bea-e3c6-4c2c-952c-aaf1341b5033-goldmane-key-pair\") pod \"goldmane-5b85766d88-gpjhg\" (UID: \"c6df9bea-e3c6-4c2c-952c-aaf1341b5033\") " pod="calico-system/goldmane-5b85766d88-gpjhg" Apr 14 01:11:26.712675 kubelet[2666]: I0414 01:11:26.711962 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bkjr\" (UniqueName: \"kubernetes.io/projected/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-kube-api-access-5bkjr\") pod \"whisker-cf6f98489-m9nv8\" (UID: \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\") " pod="calico-system/whisker-cf6f98489-m9nv8" Apr 14 01:11:26.712675 kubelet[2666]: I0414 01:11:26.712124 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7-config-volume\") pod \"coredns-674b8bbfcf-pdxgp\" (UID: \"5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7\") " pod="kube-system/coredns-674b8bbfcf-pdxgp" Apr 14 01:11:26.712675 kubelet[2666]: I0414 01:11:26.712146 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmjgk\" (UniqueName: \"kubernetes.io/projected/0e971146-3f82-4810-b7e0-9307354ac58e-kube-api-access-bmjgk\") pod \"calico-kube-controllers-69f7f46b8c-2nl2l\" (UID: \"0e971146-3f82-4810-b7e0-9307354ac58e\") " pod="calico-system/calico-kube-controllers-69f7f46b8c-2nl2l" Apr 14 01:11:26.712675 
kubelet[2666]: I0414 01:11:26.712189 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8cfb8838-233c-4923-a3d1-211c57385c00-calico-apiserver-certs\") pod \"calico-apiserver-858fc86974-lgjnc\" (UID: \"8cfb8838-233c-4923-a3d1-211c57385c00\") " pod="calico-system/calico-apiserver-858fc86974-lgjnc" Apr 14 01:11:26.712778 kubelet[2666]: I0414 01:11:26.712204 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6df9bea-e3c6-4c2c-952c-aaf1341b5033-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-gpjhg\" (UID: \"c6df9bea-e3c6-4c2c-952c-aaf1341b5033\") " pod="calico-system/goldmane-5b85766d88-gpjhg" Apr 14 01:11:26.712778 kubelet[2666]: I0414 01:11:26.712222 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs88r\" (UniqueName: \"kubernetes.io/projected/be5517b6-6d7c-4af7-8c09-ffaa013ba114-kube-api-access-rs88r\") pod \"calico-apiserver-858fc86974-6fm4q\" (UID: \"be5517b6-6d7c-4af7-8c09-ffaa013ba114\") " pod="calico-system/calico-apiserver-858fc86974-6fm4q" Apr 14 01:11:26.712778 kubelet[2666]: I0414 01:11:26.712245 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjj6h\" (UniqueName: \"kubernetes.io/projected/c6df9bea-e3c6-4c2c-952c-aaf1341b5033-kube-api-access-kjj6h\") pod \"goldmane-5b85766d88-gpjhg\" (UID: \"c6df9bea-e3c6-4c2c-952c-aaf1341b5033\") " pod="calico-system/goldmane-5b85766d88-gpjhg" Apr 14 01:11:26.871126 containerd[1584]: time="2026-04-14T01:11:26.871010660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mzw4,Uid:241647e8-70b4-4fa4-aa60-9aba8555b739,Namespace:calico-system,Attempt:0,}" Apr 14 01:11:26.924991 kubelet[2666]: E0414 01:11:26.924691 2666 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:26.924991 kubelet[2666]: E0414 01:11:26.924831 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:26.928549 containerd[1584]: time="2026-04-14T01:11:26.928433793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9cp8,Uid:d332dd42-09a8-4567-aca9-70ecff2b60fc,Namespace:kube-system,Attempt:0,}" Apr 14 01:11:26.930225 containerd[1584]: time="2026-04-14T01:11:26.929609927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pdxgp,Uid:5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7,Namespace:kube-system,Attempt:0,}" Apr 14 01:11:26.932230 containerd[1584]: time="2026-04-14T01:11:26.932190638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858fc86974-lgjnc,Uid:8cfb8838-233c-4923-a3d1-211c57385c00,Namespace:calico-system,Attempt:0,}" Apr 14 01:11:26.938039 containerd[1584]: time="2026-04-14T01:11:26.937868179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cf6f98489-m9nv8,Uid:3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad,Namespace:calico-system,Attempt:0,}" Apr 14 01:11:26.945116 containerd[1584]: time="2026-04-14T01:11:26.944964708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f7f46b8c-2nl2l,Uid:0e971146-3f82-4810-b7e0-9307354ac58e,Namespace:calico-system,Attempt:0,}" Apr 14 01:11:26.945384 containerd[1584]: time="2026-04-14T01:11:26.945358622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gpjhg,Uid:c6df9bea-e3c6-4c2c-952c-aaf1341b5033,Namespace:calico-system,Attempt:0,}" Apr 14 01:11:26.949363 containerd[1584]: time="2026-04-14T01:11:26.949253422Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-858fc86974-6fm4q,Uid:be5517b6-6d7c-4af7-8c09-ffaa013ba114,Namespace:calico-system,Attempt:0,}" Apr 14 01:11:26.976725 containerd[1584]: time="2026-04-14T01:11:26.976635259Z" level=error msg="Failed to destroy network for sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:26.977114 containerd[1584]: time="2026-04-14T01:11:26.977058153Z" level=error msg="encountered an error cleaning up failed sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:26.977160 containerd[1584]: time="2026-04-14T01:11:26.977131969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mzw4,Uid:241647e8-70b4-4fa4-aa60-9aba8555b739,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:26.983808 kubelet[2666]: E0414 01:11:26.983731 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:26.983928 kubelet[2666]: E0414 
01:11:26.983820 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5mzw4" Apr 14 01:11:26.983928 kubelet[2666]: E0414 01:11:26.983845 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5mzw4" Apr 14 01:11:26.983928 kubelet[2666]: E0414 01:11:26.983893 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5mzw4_calico-system(241647e8-70b4-4fa4-aa60-9aba8555b739)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5mzw4_calico-system(241647e8-70b4-4fa4-aa60-9aba8555b739)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:27.054890 kubelet[2666]: I0414 01:11:27.054832 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:27.063443 containerd[1584]: 
time="2026-04-14T01:11:27.063254007Z" level=info msg="CreateContainer within sandbox \"718ee6f5981a8af642b0456e0d122e1fdfc12aaa6bae2326d70f6758aa7cd14d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 14 01:11:27.098084 containerd[1584]: time="2026-04-14T01:11:27.097936196Z" level=info msg="StopPodSandbox for \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\"" Apr 14 01:11:27.099583 containerd[1584]: time="2026-04-14T01:11:27.099147595Z" level=info msg="Ensure that sandbox 13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb in task-service has been cleanup successfully" Apr 14 01:11:27.109110 containerd[1584]: time="2026-04-14T01:11:27.108979635Z" level=info msg="CreateContainer within sandbox \"718ee6f5981a8af642b0456e0d122e1fdfc12aaa6bae2326d70f6758aa7cd14d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"15b962e5056b1c4b436b176424227ab3c03c6e6a84e742450471c54d1f2ab9be\"" Apr 14 01:11:27.112765 containerd[1584]: time="2026-04-14T01:11:27.111561173Z" level=info msg="StartContainer for \"15b962e5056b1c4b436b176424227ab3c03c6e6a84e742450471c54d1f2ab9be\"" Apr 14 01:11:27.121662 containerd[1584]: time="2026-04-14T01:11:27.121511862Z" level=error msg="Failed to destroy network for sandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.123100 containerd[1584]: time="2026-04-14T01:11:27.123054652Z" level=error msg="encountered an error cleaning up failed sandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 
01:11:27.123278 containerd[1584]: time="2026-04-14T01:11:27.123259025Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858fc86974-lgjnc,Uid:8cfb8838-233c-4923-a3d1-211c57385c00,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.125128 kubelet[2666]: E0414 01:11:27.123675 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.125128 kubelet[2666]: E0414 01:11:27.123802 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-858fc86974-lgjnc" Apr 14 01:11:27.125128 kubelet[2666]: E0414 01:11:27.123827 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-858fc86974-lgjnc" Apr 
14 01:11:27.125264 kubelet[2666]: E0414 01:11:27.123872 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-858fc86974-lgjnc_calico-system(8cfb8838-233c-4923-a3d1-211c57385c00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-858fc86974-lgjnc_calico-system(8cfb8838-233c-4923-a3d1-211c57385c00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-858fc86974-lgjnc" podUID="8cfb8838-233c-4923-a3d1-211c57385c00" Apr 14 01:11:27.162886 containerd[1584]: time="2026-04-14T01:11:27.162848346Z" level=error msg="Failed to destroy network for sandbox \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.163418 containerd[1584]: time="2026-04-14T01:11:27.163398106Z" level=error msg="encountered an error cleaning up failed sandbox \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.163511 containerd[1584]: time="2026-04-14T01:11:27.163496590Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pdxgp,Uid:5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.164082 kubelet[2666]: E0414 01:11:27.163975 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.164153 kubelet[2666]: E0414 01:11:27.164112 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pdxgp" Apr 14 01:11:27.164385 kubelet[2666]: E0414 01:11:27.164166 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pdxgp" Apr 14 01:11:27.164385 kubelet[2666]: E0414 01:11:27.164245 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pdxgp_kube-system(5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-pdxgp_kube-system(5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pdxgp" podUID="5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7" Apr 14 01:11:27.171862 containerd[1584]: time="2026-04-14T01:11:27.171699881Z" level=error msg="Failed to destroy network for sandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.172307 containerd[1584]: time="2026-04-14T01:11:27.172286226Z" level=error msg="encountered an error cleaning up failed sandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.172443 containerd[1584]: time="2026-04-14T01:11:27.172428219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9cp8,Uid:d332dd42-09a8-4567-aca9-70ecff2b60fc,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.172907 kubelet[2666]: E0414 01:11:27.172686 2666 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.172907 kubelet[2666]: E0414 01:11:27.172806 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t9cp8" Apr 14 01:11:27.172907 kubelet[2666]: E0414 01:11:27.172834 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t9cp8" Apr 14 01:11:27.172992 kubelet[2666]: E0414 01:11:27.172875 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t9cp8_kube-system(d332dd42-09a8-4567-aca9-70ecff2b60fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t9cp8_kube-system(d332dd42-09a8-4567-aca9-70ecff2b60fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t9cp8" podUID="d332dd42-09a8-4567-aca9-70ecff2b60fc" Apr 14 01:11:27.178429 containerd[1584]: time="2026-04-14T01:11:27.175907638Z" level=error msg="Failed to destroy network for sandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.181567 containerd[1584]: time="2026-04-14T01:11:27.181438402Z" level=error msg="encountered an error cleaning up failed sandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.181567 containerd[1584]: time="2026-04-14T01:11:27.181488417Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f7f46b8c-2nl2l,Uid:0e971146-3f82-4810-b7e0-9307354ac58e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.181758 kubelet[2666]: E0414 01:11:27.181732 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.181797 kubelet[2666]: E0414 
01:11:27.181779 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69f7f46b8c-2nl2l" Apr 14 01:11:27.181824 kubelet[2666]: E0414 01:11:27.181798 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69f7f46b8c-2nl2l" Apr 14 01:11:27.181872 kubelet[2666]: E0414 01:11:27.181833 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69f7f46b8c-2nl2l_calico-system(0e971146-3f82-4810-b7e0-9307354ac58e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69f7f46b8c-2nl2l_calico-system(0e971146-3f82-4810-b7e0-9307354ac58e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69f7f46b8c-2nl2l" podUID="0e971146-3f82-4810-b7e0-9307354ac58e" Apr 14 01:11:27.192343 containerd[1584]: time="2026-04-14T01:11:27.191936537Z" level=error msg="Failed to destroy network for sandbox 
\"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.192627 containerd[1584]: time="2026-04-14T01:11:27.192579274Z" level=error msg="encountered an error cleaning up failed sandbox \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.192730 containerd[1584]: time="2026-04-14T01:11:27.192697597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cf6f98489-m9nv8,Uid:3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.193656 kubelet[2666]: E0414 01:11:27.193533 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.193656 kubelet[2666]: E0414 01:11:27.193636 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cf6f98489-m9nv8" Apr 14 01:11:27.193656 kubelet[2666]: E0414 01:11:27.193659 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cf6f98489-m9nv8" Apr 14 01:11:27.193812 kubelet[2666]: E0414 01:11:27.193706 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cf6f98489-m9nv8_calico-system(3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cf6f98489-m9nv8_calico-system(3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cf6f98489-m9nv8" podUID="3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad" Apr 14 01:11:27.202067 containerd[1584]: time="2026-04-14T01:11:27.201970283Z" level=error msg="StopPodSandbox for \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\" failed" error="failed to destroy network for sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.203525 kubelet[2666]: 
E0414 01:11:27.203389 2666 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:27.203609 kubelet[2666]: E0414 01:11:27.203537 2666 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb"} Apr 14 01:11:27.203633 kubelet[2666]: E0414 01:11:27.203607 2666 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"241647e8-70b4-4fa4-aa60-9aba8555b739\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 14 01:11:27.203750 kubelet[2666]: E0414 01:11:27.203638 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"241647e8-70b4-4fa4-aa60-9aba8555b739\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5mzw4" podUID="241647e8-70b4-4fa4-aa60-9aba8555b739" Apr 14 01:11:27.232196 containerd[1584]: time="2026-04-14T01:11:27.232158598Z" 
level=error msg="Failed to destroy network for sandbox \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.233104 containerd[1584]: time="2026-04-14T01:11:27.233083336Z" level=error msg="encountered an error cleaning up failed sandbox \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.233360 containerd[1584]: time="2026-04-14T01:11:27.233301950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gpjhg,Uid:c6df9bea-e3c6-4c2c-952c-aaf1341b5033,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.233776 kubelet[2666]: E0414 01:11:27.233712 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.233917 kubelet[2666]: E0414 01:11:27.233906 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-gpjhg" Apr 14 01:11:27.234083 kubelet[2666]: E0414 01:11:27.234006 2666 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-gpjhg" Apr 14 01:11:27.234371 kubelet[2666]: E0414 01:11:27.234147 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-gpjhg_calico-system(c6df9bea-e3c6-4c2c-952c-aaf1341b5033)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-gpjhg_calico-system(c6df9bea-e3c6-4c2c-952c-aaf1341b5033)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-gpjhg" podUID="c6df9bea-e3c6-4c2c-952c-aaf1341b5033" Apr 14 01:11:27.234922 containerd[1584]: time="2026-04-14T01:11:27.234899187Z" level=error msg="Failed to destroy network for sandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 
01:11:27.235258 containerd[1584]: time="2026-04-14T01:11:27.235240967Z" level=error msg="encountered an error cleaning up failed sandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.235386 containerd[1584]: time="2026-04-14T01:11:27.235314632Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858fc86974-6fm4q,Uid:be5517b6-6d7c-4af7-8c09-ffaa013ba114,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.235696 kubelet[2666]: E0414 01:11:27.235665 2666 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 14 01:11:27.235729 kubelet[2666]: E0414 01:11:27.235708 2666 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-858fc86974-6fm4q" Apr 14 01:11:27.235729 kubelet[2666]: E0414 01:11:27.235724 2666 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-858fc86974-6fm4q" Apr 14 01:11:27.235775 kubelet[2666]: E0414 01:11:27.235755 2666 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-858fc86974-6fm4q_calico-system(be5517b6-6d7c-4af7-8c09-ffaa013ba114)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-858fc86974-6fm4q_calico-system(be5517b6-6d7c-4af7-8c09-ffaa013ba114)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-858fc86974-6fm4q" podUID="be5517b6-6d7c-4af7-8c09-ffaa013ba114" Apr 14 01:11:27.246661 containerd[1584]: time="2026-04-14T01:11:27.246586463Z" level=info msg="StartContainer for \"15b962e5056b1c4b436b176424227ab3c03c6e6a84e742450471c54d1f2ab9be\" returns successfully" Apr 14 01:11:27.829093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb-shm.mount: Deactivated successfully. 
Apr 14 01:11:28.060542 kubelet[2666]: I0414 01:11:28.060401 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Apr 14 01:11:28.061397 containerd[1584]: time="2026-04-14T01:11:28.061226277Z" level=info msg="StopPodSandbox for \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\"" Apr 14 01:11:28.061781 containerd[1584]: time="2026-04-14T01:11:28.061512500Z" level=info msg="Ensure that sandbox c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18 in task-service has been cleanup successfully" Apr 14 01:11:28.062856 kubelet[2666]: I0414 01:11:28.062809 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" Apr 14 01:11:28.063693 containerd[1584]: time="2026-04-14T01:11:28.063606562Z" level=info msg="StopPodSandbox for \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\"" Apr 14 01:11:28.063752 kubelet[2666]: I0414 01:11:28.063688 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Apr 14 01:11:28.064163 containerd[1584]: time="2026-04-14T01:11:28.063970242Z" level=info msg="Ensure that sandbox 6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d in task-service has been cleanup successfully" Apr 14 01:11:28.064163 containerd[1584]: time="2026-04-14T01:11:28.064024037Z" level=info msg="StopPodSandbox for \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\"" Apr 14 01:11:28.064163 containerd[1584]: time="2026-04-14T01:11:28.064140107Z" level=info msg="Ensure that sandbox 1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3 in task-service has been cleanup successfully" Apr 14 01:11:28.068873 kubelet[2666]: I0414 01:11:28.067899 2666 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Apr 14 01:11:28.069566 containerd[1584]: time="2026-04-14T01:11:28.069305254Z" level=info msg="StopPodSandbox for \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\"" Apr 14 01:11:28.069755 containerd[1584]: time="2026-04-14T01:11:28.069665148Z" level=info msg="Ensure that sandbox 4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854 in task-service has been cleanup successfully" Apr 14 01:11:28.084667 kubelet[2666]: I0414 01:11:28.082297 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Apr 14 01:11:28.088046 containerd[1584]: time="2026-04-14T01:11:28.088008900Z" level=info msg="StopPodSandbox for \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\"" Apr 14 01:11:28.088671 containerd[1584]: time="2026-04-14T01:11:28.088450646Z" level=info msg="Ensure that sandbox 806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab in task-service has been cleanup successfully" Apr 14 01:11:28.097664 kubelet[2666]: I0414 01:11:28.097570 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:28.098107 containerd[1584]: time="2026-04-14T01:11:28.098088140Z" level=info msg="StopPodSandbox for \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\"" Apr 14 01:11:28.098387 containerd[1584]: time="2026-04-14T01:11:28.098372602Z" level=info msg="Ensure that sandbox a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7 in task-service has been cleanup successfully" Apr 14 01:11:28.099590 kubelet[2666]: I0414 01:11:28.099471 2666 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Apr 14 01:11:28.101593 
containerd[1584]: time="2026-04-14T01:11:28.100277928Z" level=info msg="StopPodSandbox for \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\"" Apr 14 01:11:28.102108 containerd[1584]: time="2026-04-14T01:11:28.102093835Z" level=info msg="Ensure that sandbox b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630 in task-service has been cleanup successfully" Apr 14 01:11:28.119385 kubelet[2666]: I0414 01:11:28.118710 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dlffs" podStartSLOduration=3.397555398 podStartE2EDuration="22.118694427s" podCreationTimestamp="2026-04-14 01:11:06 +0000 UTC" firstStartedPulling="2026-04-14 01:11:07.075922037 +0000 UTC m=+17.320800315" lastFinishedPulling="2026-04-14 01:11:25.797061059 +0000 UTC m=+36.041939344" observedRunningTime="2026-04-14 01:11:28.117504548 +0000 UTC m=+38.362382831" watchObservedRunningTime="2026-04-14 01:11:28.118694427 +0000 UTC m=+38.363572714" Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.275 [INFO][4009] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.276 [INFO][4009] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" iface="eth0" netns="/var/run/netns/cni-3ada4ab3-2ac9-3b7b-6019-92a95192ff4d" Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.276 [INFO][4009] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" iface="eth0" netns="/var/run/netns/cni-3ada4ab3-2ac9-3b7b-6019-92a95192ff4d" Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.278 [INFO][4009] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" iface="eth0" netns="/var/run/netns/cni-3ada4ab3-2ac9-3b7b-6019-92a95192ff4d" Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.278 [INFO][4009] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.278 [INFO][4009] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.345 [INFO][4100] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" HandleID="k8s-pod-network.806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0" Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.348 [INFO][4100] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.348 [INFO][4100] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.356 [WARNING][4100] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" HandleID="k8s-pod-network.806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0" Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.356 [INFO][4100] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" HandleID="k8s-pod-network.806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0" Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.358 [INFO][4100] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:28.371637 containerd[1584]: 2026-04-14 01:11:28.365 [INFO][4009] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Apr 14 01:11:28.372987 containerd[1584]: time="2026-04-14T01:11:28.372909491Z" level=info msg="TearDown network for sandbox \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\" successfully" Apr 14 01:11:28.374479 systemd[1]: run-netns-cni\x2d3ada4ab3\x2d2ac9\x2d3b7b\x2d6019\x2d92a95192ff4d.mount: Deactivated successfully. 
Apr 14 01:11:28.375420 containerd[1584]: time="2026-04-14T01:11:28.375400031Z" level=info msg="StopPodSandbox for \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\" returns successfully" Apr 14 01:11:28.376225 kubelet[2666]: E0414 01:11:28.376062 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:28.376850 containerd[1584]: time="2026-04-14T01:11:28.376831556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pdxgp,Uid:5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7,Namespace:kube-system,Attempt:1,}" Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.276 [INFO][4024] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.276 [INFO][4024] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" iface="eth0" netns="/var/run/netns/cni-a10501d2-ae0b-8dcb-59a7-f7e1181e2440" Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.277 [INFO][4024] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" iface="eth0" netns="/var/run/netns/cni-a10501d2-ae0b-8dcb-59a7-f7e1181e2440" Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.277 [INFO][4024] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" iface="eth0" netns="/var/run/netns/cni-a10501d2-ae0b-8dcb-59a7-f7e1181e2440" Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.277 [INFO][4024] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.277 [INFO][4024] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.348 [INFO][4097] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" HandleID="k8s-pod-network.a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.349 [INFO][4097] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.358 [INFO][4097] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.369 [WARNING][4097] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" HandleID="k8s-pod-network.a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.369 [INFO][4097] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" HandleID="k8s-pod-network.a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.376 [INFO][4097] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:28.383410 containerd[1584]: 2026-04-14 01:11:28.381 [INFO][4024] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:28.385882 systemd[1]: run-netns-cni\x2da10501d2\x2dae0b\x2d8dcb\x2d59a7\x2df7e1181e2440.mount: Deactivated successfully. 
Apr 14 01:11:28.386545 containerd[1584]: time="2026-04-14T01:11:28.386438475Z" level=info msg="TearDown network for sandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\" successfully"
Apr 14 01:11:28.386545 containerd[1584]: time="2026-04-14T01:11:28.386469411Z" level=info msg="StopPodSandbox for \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\" returns successfully"
Apr 14 01:11:28.386702 kubelet[2666]: E0414 01:11:28.386674 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:28.388161 containerd[1584]: time="2026-04-14T01:11:28.388115568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9cp8,Uid:d332dd42-09a8-4567-aca9-70ecff2b60fc,Namespace:kube-system,Attempt:1,}"
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.243 [INFO][3976] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854"
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.252 [INFO][3976] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" iface="eth0" netns="/var/run/netns/cni-8b07598a-f55e-288a-4f24-79204d463731"
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.253 [INFO][3976] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" iface="eth0" netns="/var/run/netns/cni-8b07598a-f55e-288a-4f24-79204d463731"
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.254 [INFO][3976] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" iface="eth0" netns="/var/run/netns/cni-8b07598a-f55e-288a-4f24-79204d463731"
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.254 [INFO][3976] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854"
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.254 [INFO][3976] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854"
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.363 [INFO][4082] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" HandleID="k8s-pod-network.4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0"
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.363 [INFO][4082] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.378 [INFO][4082] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.401 [WARNING][4082] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" HandleID="k8s-pod-network.4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0"
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.401 [INFO][4082] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" HandleID="k8s-pod-network.4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0"
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.406 [INFO][4082] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:28.418743 containerd[1584]: 2026-04-14 01:11:28.409 [INFO][3976] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854"
Apr 14 01:11:28.419471 containerd[1584]: time="2026-04-14T01:11:28.419416321Z" level=info msg="TearDown network for sandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\" successfully"
Apr 14 01:11:28.419471 containerd[1584]: time="2026-04-14T01:11:28.419451384Z" level=info msg="StopPodSandbox for \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\" returns successfully"
Apr 14 01:11:28.420714 containerd[1584]: time="2026-04-14T01:11:28.420645409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858fc86974-lgjnc,Uid:8cfb8838-233c-4923-a3d1-211c57385c00,Namespace:calico-system,Attempt:1,}"
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.243 [INFO][3961] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18"
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.247 [INFO][3961] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" iface="eth0" netns="/var/run/netns/cni-94b0572e-5134-c015-5ef2-8ef0f9dee4f6"
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.248 [INFO][3961] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" iface="eth0" netns="/var/run/netns/cni-94b0572e-5134-c015-5ef2-8ef0f9dee4f6"
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.250 [INFO][3961] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" iface="eth0" netns="/var/run/netns/cni-94b0572e-5134-c015-5ef2-8ef0f9dee4f6"
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.251 [INFO][3961] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18"
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.251 [INFO][3961] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18"
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.375 [INFO][4075] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" HandleID="k8s-pod-network.c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0"
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.380 [INFO][4075] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.406 [INFO][4075] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.417 [WARNING][4075] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" HandleID="k8s-pod-network.c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0"
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.417 [INFO][4075] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" HandleID="k8s-pod-network.c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0"
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.427 [INFO][4075] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:28.433774 containerd[1584]: 2026-04-14 01:11:28.432 [INFO][3961] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18"
Apr 14 01:11:28.434222 containerd[1584]: time="2026-04-14T01:11:28.433966310Z" level=info msg="TearDown network for sandbox \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\" successfully"
Apr 14 01:11:28.434222 containerd[1584]: time="2026-04-14T01:11:28.434055969Z" level=info msg="StopPodSandbox for \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\" returns successfully"
Apr 14 01:11:28.435905 containerd[1584]: time="2026-04-14T01:11:28.435767700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gpjhg,Uid:c6df9bea-e3c6-4c2c-952c-aaf1341b5033,Namespace:calico-system,Attempt:1,}"
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.238 [INFO][3963] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.250 [INFO][3963] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" iface="eth0" netns="/var/run/netns/cni-228e5d61-dbd6-2243-dea4-b5ee64968ee9"
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.250 [INFO][3963] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" iface="eth0" netns="/var/run/netns/cni-228e5d61-dbd6-2243-dea4-b5ee64968ee9"
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.251 [INFO][3963] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" iface="eth0" netns="/var/run/netns/cni-228e5d61-dbd6-2243-dea4-b5ee64968ee9"
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.251 [INFO][3963] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.251 [INFO][3963] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.376 [INFO][4076] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" HandleID="k8s-pod-network.6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" Workload="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.387 [INFO][4076] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.425 [INFO][4076] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.437 [WARNING][4076] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" HandleID="k8s-pod-network.6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" Workload="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.437 [INFO][4076] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" HandleID="k8s-pod-network.6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" Workload="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.441 [INFO][4076] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:28.452590 containerd[1584]: 2026-04-14 01:11:28.449 [INFO][3963] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:28.453250 containerd[1584]: time="2026-04-14T01:11:28.453228057Z" level=info msg="TearDown network for sandbox \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\" successfully"
Apr 14 01:11:28.453305 containerd[1584]: time="2026-04-14T01:11:28.453297722Z" level=info msg="StopPodSandbox for \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\" returns successfully"
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.266 [INFO][3956] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3"
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.267 [INFO][3956] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" iface="eth0" netns="/var/run/netns/cni-6337346a-d839-8c5e-394d-42ef7212f3d2"
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.268 [INFO][3956] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" iface="eth0" netns="/var/run/netns/cni-6337346a-d839-8c5e-394d-42ef7212f3d2"
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.269 [INFO][3956] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" iface="eth0" netns="/var/run/netns/cni-6337346a-d839-8c5e-394d-42ef7212f3d2"
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.271 [INFO][3956] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3"
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.271 [INFO][3956] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3"
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.393 [INFO][4091] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" HandleID="k8s-pod-network.1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0"
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.394 [INFO][4091] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.442 [INFO][4091] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.453 [WARNING][4091] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" HandleID="k8s-pod-network.1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0"
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.453 [INFO][4091] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" HandleID="k8s-pod-network.1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0"
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.455 [INFO][4091] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:28.464051 containerd[1584]: 2026-04-14 01:11:28.458 [INFO][3956] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3"
Apr 14 01:11:28.464655 containerd[1584]: time="2026-04-14T01:11:28.464636202Z" level=info msg="TearDown network for sandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\" successfully"
Apr 14 01:11:28.464703 containerd[1584]: time="2026-04-14T01:11:28.464696752Z" level=info msg="StopPodSandbox for \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\" returns successfully"
Apr 14 01:11:28.465529 containerd[1584]: time="2026-04-14T01:11:28.465511581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f7f46b8c-2nl2l,Uid:0e971146-3f82-4810-b7e0-9307354ac58e,Namespace:calico-system,Attempt:1,}"
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.317 [INFO][4029] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630"
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.317 [INFO][4029] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" iface="eth0" netns="/var/run/netns/cni-c9d748de-d75a-da18-cc7c-c252a47050df"
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.319 [INFO][4029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" iface="eth0" netns="/var/run/netns/cni-c9d748de-d75a-da18-cc7c-c252a47050df"
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.319 [INFO][4029] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" iface="eth0" netns="/var/run/netns/cni-c9d748de-d75a-da18-cc7c-c252a47050df"
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.319 [INFO][4029] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630"
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.319 [INFO][4029] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630"
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.403 [INFO][4113] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" HandleID="k8s-pod-network.b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0"
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.404 [INFO][4113] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.457 [INFO][4113] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.473 [WARNING][4113] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" HandleID="k8s-pod-network.b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0"
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.473 [INFO][4113] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" HandleID="k8s-pod-network.b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0"
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.478 [INFO][4113] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:28.489698 containerd[1584]: 2026-04-14 01:11:28.484 [INFO][4029] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630"
Apr 14 01:11:28.490694 containerd[1584]: time="2026-04-14T01:11:28.490516202Z" level=info msg="TearDown network for sandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\" successfully"
Apr 14 01:11:28.490694 containerd[1584]: time="2026-04-14T01:11:28.490545212Z" level=info msg="StopPodSandbox for \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\" returns successfully"
Apr 14 01:11:28.494735 containerd[1584]: time="2026-04-14T01:11:28.494706127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858fc86974-6fm4q,Uid:be5517b6-6d7c-4af7-8c09-ffaa013ba114,Namespace:calico-system,Attempt:1,}"
Apr 14 01:11:28.530215 kubelet[2666]: I0414 01:11:28.529851 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-nginx-config\") pod \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\" (UID: \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\") "
Apr 14 01:11:28.530215 kubelet[2666]: I0414 01:11:28.529920 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-whisker-ca-bundle\") pod \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\" (UID: \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\") "
Apr 14 01:11:28.530215 kubelet[2666]: I0414 01:11:28.529947 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-whisker-backend-key-pair\") pod \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\" (UID: \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\") "
Apr 14 01:11:28.530215 kubelet[2666]: I0414 01:11:28.529963 2666 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bkjr\" (UniqueName: \"kubernetes.io/projected/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-kube-api-access-5bkjr\") pod \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\" (UID: \"3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad\") "
Apr 14 01:11:28.532459 kubelet[2666]: I0414 01:11:28.531478 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad" (UID: "3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 14 01:11:28.532459 kubelet[2666]: I0414 01:11:28.531622 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad" (UID: "3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Apr 14 01:11:28.534970 kubelet[2666]: I0414 01:11:28.534790 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-kube-api-access-5bkjr" (OuterVolumeSpecName: "kube-api-access-5bkjr") pod "3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad" (UID: "3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad"). InnerVolumeSpecName "kube-api-access-5bkjr". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Apr 14 01:11:28.535557 kubelet[2666]: I0414 01:11:28.535483 2666 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad" (UID: "3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Apr 14 01:11:28.633255 kubelet[2666]: I0414 01:11:28.631379 2666 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5bkjr\" (UniqueName: \"kubernetes.io/projected/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-kube-api-access-5bkjr\") on node \"localhost\" DevicePath \"\""
Apr 14 01:11:28.634447 kubelet[2666]: I0414 01:11:28.634090 2666 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-nginx-config\") on node \"localhost\" DevicePath \"\""
Apr 14 01:11:28.634447 kubelet[2666]: I0414 01:11:28.634113 2666 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\""
Apr 14 01:11:28.634447 kubelet[2666]: I0414 01:11:28.634125 2666 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\""
Apr 14 01:11:28.678845 systemd-networkd[1244]: cali41b67747784: Link UP
Apr 14 01:11:28.679523 systemd-networkd[1244]: cali41b67747784: Gained carrier
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.487 [ERROR][4135] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.505 [INFO][4135] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0 coredns-674b8bbfcf- kube-system 5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7 1012 0 2026-04-14 01:10:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-pdxgp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali41b67747784 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Namespace="kube-system" Pod="coredns-674b8bbfcf-pdxgp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pdxgp-"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.505 [INFO][4135] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Namespace="kube-system" Pod="coredns-674b8bbfcf-pdxgp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.577 [INFO][4218] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" HandleID="k8s-pod-network.6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.592 [INFO][4218] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" HandleID="k8s-pod-network.6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e78f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-pdxgp", "timestamp":"2026-04-14 01:11:28.577071565 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00036b1e0)}
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.592 [INFO][4218] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.592 [INFO][4218] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.592 [INFO][4218] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.598 [INFO][4218] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" host="localhost"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.609 [INFO][4218] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.622 [INFO][4218] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.627 [INFO][4218] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.630 [INFO][4218] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.630 [INFO][4218] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" host="localhost"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.635 [INFO][4218] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.644 [INFO][4218] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" host="localhost"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.658 [INFO][4218] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" host="localhost"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.661 [INFO][4218] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" host="localhost"
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.661 [INFO][4218] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:28.724937 containerd[1584]: 2026-04-14 01:11:28.661 [INFO][4218] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" HandleID="k8s-pod-network.6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:28.725837 containerd[1584]: 2026-04-14 01:11:28.665 [INFO][4135] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Namespace="kube-system" Pod="coredns-674b8bbfcf-pdxgp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-pdxgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41b67747784", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 01:11:28.725837 containerd[1584]: 2026-04-14 01:11:28.666 [INFO][4135] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Namespace="kube-system" Pod="coredns-674b8bbfcf-pdxgp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:28.725837 containerd[1584]: 2026-04-14 01:11:28.666 [INFO][4135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41b67747784 ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Namespace="kube-system" Pod="coredns-674b8bbfcf-pdxgp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:28.725837 containerd[1584]: 2026-04-14 01:11:28.682 [INFO][4135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Namespace="kube-system" Pod="coredns-674b8bbfcf-pdxgp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:28.725837 containerd[1584]: 2026-04-14 01:11:28.682 [INFO][4135] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Namespace="kube-system" Pod="coredns-674b8bbfcf-pdxgp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4", Pod:"coredns-674b8bbfcf-pdxgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41b67747784", MAC:"a2:ce:74:02:10:e8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 01:11:28.725837 containerd[1584]: 2026-04-14 01:11:28.698 [INFO][4135] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4" Namespace="kube-system" Pod="coredns-674b8bbfcf-pdxgp" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:28.778550 containerd[1584]: time="2026-04-14T01:11:28.777586121Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:11:28.778550 containerd[1584]: time="2026-04-14T01:11:28.777635026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:11:28.778550 containerd[1584]: time="2026-04-14T01:11:28.777643809Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:28.778550 containerd[1584]: time="2026-04-14T01:11:28.777713775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:28.846965 systemd[1]: run-netns-cni\x2d94b0572e\x2d5134\x2dc015\x2d5ef2\x2d8ef0f9dee4f6.mount: Deactivated successfully.
Apr 14 01:11:28.847141 systemd[1]: run-netns-cni\x2d6337346a\x2dd839\x2d8c5e\x2d394d\x2d42ef7212f3d2.mount: Deactivated successfully.
Apr 14 01:11:28.847303 systemd[1]: run-netns-cni\x2dc9d748de\x2dd75a\x2dda18\x2dcc7c\x2dc252a47050df.mount: Deactivated successfully.
Apr 14 01:11:28.847534 systemd[1]: run-netns-cni\x2d228e5d61\x2ddbd6\x2d2243\x2ddea4\x2db5ee64968ee9.mount: Deactivated successfully.
Apr 14 01:11:28.847610 systemd[1]: run-netns-cni\x2d8b07598a\x2df55e\x2d288a\x2d4f24\x2d79204d463731.mount: Deactivated successfully.
Apr 14 01:11:28.847686 systemd[1]: var-lib-kubelet-pods-3e12e9e0\x2df3ab\x2d4bbd\x2da2fd\x2d5a98701989ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5bkjr.mount: Deactivated successfully.
Apr 14 01:11:28.847769 systemd[1]: var-lib-kubelet-pods-3e12e9e0\x2df3ab\x2d4bbd\x2da2fd\x2d5a98701989ad-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Apr 14 01:11:28.890630 systemd-networkd[1244]: cali492ccbaaeac: Link UP
Apr 14 01:11:28.890728 systemd-networkd[1244]: cali492ccbaaeac: Gained carrier
Apr 14 01:11:28.917292 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.480 [ERROR][4133] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.505 [INFO][4133] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0 coredns-674b8bbfcf- kube-system d332dd42-09a8-4567-aca9-70ecff2b60fc 1011 0 2026-04-14 01:10:56 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-t9cp8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali492ccbaaeac [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cp8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9cp8-"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.505 [INFO][4133] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cp8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.600 [INFO][4217] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" HandleID="k8s-pod-network.61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.612 [INFO][4217] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" HandleID="k8s-pod-network.61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-t9cp8", "timestamp":"2026-04-14 01:11:28.600514961 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00020e000)}
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.613 [INFO][4217] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.663 [INFO][4217] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.663 [INFO][4217] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.697 [INFO][4217] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" host="localhost"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.712 [INFO][4217] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.727 [INFO][4217] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.772 [INFO][4217] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.803 [INFO][4217] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.804 [INFO][4217] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" host="localhost"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.817 [INFO][4217] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.846 [INFO][4217] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" host="localhost"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.868 [INFO][4217] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" host="localhost"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.874 [INFO][4217] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" host="localhost"
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.874 [INFO][4217] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:28.943593 containerd[1584]: 2026-04-14 01:11:28.874 [INFO][4217] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" HandleID="k8s-pod-network.61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0"
Apr 14 01:11:28.951016 containerd[1584]: 2026-04-14 01:11:28.885 [INFO][4133] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cp8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d332dd42-09a8-4567-aca9-70ecff2b60fc", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-t9cp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali492ccbaaeac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 01:11:28.951016 containerd[1584]: 2026-04-14 01:11:28.885 [INFO][4133] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cp8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0"
Apr 14 01:11:28.951016 containerd[1584]: 2026-04-14 01:11:28.885 [INFO][4133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali492ccbaaeac ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cp8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0"
Apr 14 01:11:28.951016 containerd[1584]: 2026-04-14 01:11:28.890 [INFO][4133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cp8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0"
Apr 14 01:11:28.951016 containerd[1584]: 2026-04-14 01:11:28.890 [INFO][4133] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cp8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d332dd42-09a8-4567-aca9-70ecff2b60fc", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518", Pod:"coredns-674b8bbfcf-t9cp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali492ccbaaeac", MAC:"62:cd:05:20:dc:f0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 01:11:28.951016 containerd[1584]: 2026-04-14 01:11:28.932 [INFO][4133] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9cp8" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0"
Apr 14 01:11:29.014573 containerd[1584]: time="2026-04-14T01:11:29.010567800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pdxgp,Uid:5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7,Namespace:kube-system,Attempt:1,} returns sandbox id \"6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4\""
Apr 14 01:11:29.014760 kubelet[2666]: E0414 01:11:29.012544 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:29.032046 containerd[1584]: time="2026-04-14T01:11:29.029752026Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 14 01:11:29.032046 containerd[1584]: time="2026-04-14T01:11:29.031464594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 14 01:11:29.032046 containerd[1584]: time="2026-04-14T01:11:29.031480187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:29.032046 containerd[1584]: time="2026-04-14T01:11:29.031828058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 14 01:11:29.047429 containerd[1584]: time="2026-04-14T01:11:29.044314586Z" level=info msg="CreateContainer within sandbox \"6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 01:11:29.098130 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Apr 14 01:11:29.157380 containerd[1584]: time="2026-04-14T01:11:29.157086044Z" level=info msg="CreateContainer within sandbox \"6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2c997caadcdc5ebc2d6909fc16e5f36c2b084e8d655f73bfe41e8d39a67c7e8\""
Apr 14 01:11:29.160586 containerd[1584]: time="2026-04-14T01:11:29.157965381Z" level=info msg="StartContainer for \"d2c997caadcdc5ebc2d6909fc16e5f36c2b084e8d655f73bfe41e8d39a67c7e8\""
Apr 14 01:11:29.179069 systemd-networkd[1244]: cali4fbea8dda84: Link UP
Apr 14 01:11:29.180021 systemd-networkd[1244]: cali4fbea8dda84: Gained carrier
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:28.537 [ERROR][4168] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:28.566 [INFO][4168] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--5b85766d88--gpjhg-eth0 goldmane-5b85766d88- calico-system c6df9bea-e3c6-4c2c-952c-aaf1341b5033 1009 0 2026-04-14 01:11:06 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-5b85766d88-gpjhg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4fbea8dda84 [] [] }} ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Namespace="calico-system" Pod="goldmane-5b85766d88-gpjhg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gpjhg-"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:28.566 [INFO][4168] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Namespace="calico-system" Pod="goldmane-5b85766d88-gpjhg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:28.668 [INFO][4234] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" HandleID="k8s-pod-network.d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:28.675 [INFO][4234] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" HandleID="k8s-pod-network.d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af2f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-5b85766d88-gpjhg", "timestamp":"2026-04-14 01:11:28.668109313 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004e1ce0)}
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:28.676 [INFO][4234] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:28.877 [INFO][4234] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:28.877 [INFO][4234] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:28.894 [INFO][4234] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" host="localhost"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:28.954 [INFO][4234] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:29.024 [INFO][4234] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:29.049 [INFO][4234] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:29.064 [INFO][4234] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:29.064 [INFO][4234] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" host="localhost"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:29.075 [INFO][4234] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:29.107 [INFO][4234] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" host="localhost"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:29.124 [INFO][4234] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" host="localhost"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:29.128 [INFO][4234] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" host="localhost"
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:29.131 [INFO][4234] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:29.273221 containerd[1584]: 2026-04-14 01:11:29.132 [INFO][4234] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" HandleID="k8s-pod-network.d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0"
Apr 14 01:11:29.275119 containerd[1584]: 2026-04-14 01:11:29.153 [INFO][4168] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Namespace="calico-system" Pod="goldmane-5b85766d88-gpjhg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--gpjhg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"c6df9bea-e3c6-4c2c-952c-aaf1341b5033", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-5b85766d88-gpjhg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fbea8dda84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 01:11:29.275119 containerd[1584]: 2026-04-14 01:11:29.153 [INFO][4168] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Namespace="calico-system" Pod="goldmane-5b85766d88-gpjhg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0"
Apr 14 01:11:29.275119 containerd[1584]: 2026-04-14 01:11:29.153 [INFO][4168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fbea8dda84 ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Namespace="calico-system" Pod="goldmane-5b85766d88-gpjhg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0"
Apr 14 01:11:29.275119 containerd[1584]: 2026-04-14 01:11:29.180 [INFO][4168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Namespace="calico-system" Pod="goldmane-5b85766d88-gpjhg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0"
Apr 14 01:11:29.275119 containerd[1584]: 2026-04-14 01:11:29.213 [INFO][4168] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Namespace="calico-system" Pod="goldmane-5b85766d88-gpjhg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--gpjhg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"c6df9bea-e3c6-4c2c-952c-aaf1341b5033", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891", Pod:"goldmane-5b85766d88-gpjhg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fbea8dda84", MAC:"32:73:4c:06:97:e4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 01:11:29.275119 containerd[1584]: 2026-04-14 01:11:29.261 [INFO][4168] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891" Namespace="calico-system" Pod="goldmane-5b85766d88-gpjhg" WorkloadEndpoint="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0"
Apr 14 01:11:29.330384 containerd[1584]: time="2026-04-14T01:11:29.327824532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9cp8,Uid:d332dd42-09a8-4567-aca9-70ecff2b60fc,Namespace:kube-system,Attempt:1,} returns sandbox id \"61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518\""
Apr 14 01:11:29.341694 kubelet[2666]: E0414 01:11:29.338215 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Apr 14 01:11:29.365758 kubelet[2666]: I0414 01:11:29.364784 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnb8h\" (UniqueName: \"kubernetes.io/projected/78f967c9-27f4-4564-b1a6-08e9da295149-kube-api-access-hnb8h\") pod \"whisker-b6b7f96f4-wbp4k\" (UID: \"78f967c9-27f4-4564-b1a6-08e9da295149\") " pod="calico-system/whisker-b6b7f96f4-wbp4k"
Apr 14 01:11:29.365758 kubelet[2666]: I0414 01:11:29.364847 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78f967c9-27f4-4564-b1a6-08e9da295149-whisker-ca-bundle\") pod \"whisker-b6b7f96f4-wbp4k\" (UID: \"78f967c9-27f4-4564-b1a6-08e9da295149\") " pod="calico-system/whisker-b6b7f96f4-wbp4k"
Apr 14 01:11:29.365758 kubelet[2666]: I0414 01:11:29.364869 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78f967c9-27f4-4564-b1a6-08e9da295149-whisker-backend-key-pair\") pod \"whisker-b6b7f96f4-wbp4k\" (UID: \"78f967c9-27f4-4564-b1a6-08e9da295149\") " pod="calico-system/whisker-b6b7f96f4-wbp4k"
Apr 14 01:11:29.365758 kubelet[2666]: I0414 01:11:29.364884 2666 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/78f967c9-27f4-4564-b1a6-08e9da295149-nginx-config\") pod \"whisker-b6b7f96f4-wbp4k\" (UID: \"78f967c9-27f4-4564-b1a6-08e9da295149\") " pod="calico-system/whisker-b6b7f96f4-wbp4k"
Apr 14 01:11:29.373434 containerd[1584]: time="2026-04-14T01:11:29.373402210Z" level=info msg="CreateContainer within sandbox \"61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Apr 14 01:11:29.400042 systemd-networkd[1244]: calia3ad035e267: Link UP
Apr 14 01:11:29.413938 systemd-networkd[1244]: calia3ad035e267: Gained carrier
Apr 14 01:11:29.417799 containerd[1584]: time="2026-04-14T01:11:29.416840066Z" level=info msg="CreateContainer within sandbox \"61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1bdfaf5323873cce231bfd9efc08c6b3f2dcf9695ff3c0fa6fc04ae46865cfaa\""
Apr 14 01:11:29.420807 containerd[1584]: time="2026-04-14T01:11:29.418360946Z" level=info msg="StartContainer for \"1bdfaf5323873cce231bfd9efc08c6b3f2dcf9695ff3c0fa6fc04ae46865cfaa\""
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:28.559 [ERROR][4185] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:28.585 [INFO][4185] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0 calico-kube-controllers-69f7f46b8c- calico-system 0e971146-3f82-4810-b7e0-9307354ac58e 1010 0 2026-04-14 01:11:06 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69f7f46b8c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-69f7f46b8c-2nl2l eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia3ad035e267 [] [] }} ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Namespace="calico-system" Pod="calico-kube-controllers-69f7f46b8c-2nl2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-"
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:28.585 [INFO][4185] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Namespace="calico-system" Pod="calico-kube-controllers-69f7f46b8c-2nl2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0"
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:28.661 [INFO][4249] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" HandleID="k8s-pod-network.108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0"
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:28.678 [INFO][4249] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" HandleID="k8s-pod-network.108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001adbf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-69f7f46b8c-2nl2l", "timestamp":"2026-04-14 01:11:28.661470558 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001f34a0)}
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:28.681 [INFO][4249] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.133 [INFO][4249] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.133 [INFO][4249] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.170 [INFO][4249] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" host="localhost"
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.266 [INFO][4249] ipam/ipam.go 409: Looking up existing affinities for host host="localhost"
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.279 [INFO][4249] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost"
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.311 [INFO][4249] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.330 [INFO][4249] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.331 [INFO][4249] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" host="localhost"
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.334 [INFO][4249] ipam/ipam.go 1806: Creating new handle:
k8s-pod-network.108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42 Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.351 [INFO][4249] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" host="localhost" Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.366 [INFO][4249] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" host="localhost" Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.368 [INFO][4249] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" host="localhost" Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.368 [INFO][4249] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 01:11:29.451291 containerd[1584]: 2026-04-14 01:11:29.368 [INFO][4249] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" HandleID="k8s-pod-network.108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:29.451969 containerd[1584]: 2026-04-14 01:11:29.393 [INFO][4185] cni-plugin/k8s.go 418: Populated endpoint ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Namespace="calico-system" Pod="calico-kube-controllers-69f7f46b8c-2nl2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0", GenerateName:"calico-kube-controllers-69f7f46b8c-", Namespace:"calico-system", SelfLink:"", UID:"0e971146-3f82-4810-b7e0-9307354ac58e", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69f7f46b8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-69f7f46b8c-2nl2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3ad035e267", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:29.451969 containerd[1584]: 2026-04-14 01:11:29.394 [INFO][4185] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Namespace="calico-system" Pod="calico-kube-controllers-69f7f46b8c-2nl2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:29.451969 containerd[1584]: 2026-04-14 01:11:29.394 [INFO][4185] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3ad035e267 ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Namespace="calico-system" Pod="calico-kube-controllers-69f7f46b8c-2nl2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:29.451969 containerd[1584]: 2026-04-14 01:11:29.402 [INFO][4185] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Namespace="calico-system" Pod="calico-kube-controllers-69f7f46b8c-2nl2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:29.451969 containerd[1584]: 2026-04-14 01:11:29.404 [INFO][4185] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Namespace="calico-system" Pod="calico-kube-controllers-69f7f46b8c-2nl2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0", GenerateName:"calico-kube-controllers-69f7f46b8c-", Namespace:"calico-system", SelfLink:"", UID:"0e971146-3f82-4810-b7e0-9307354ac58e", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69f7f46b8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42", Pod:"calico-kube-controllers-69f7f46b8c-2nl2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3ad035e267", MAC:"de:39:e1:36:66:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:29.451969 containerd[1584]: 2026-04-14 01:11:29.444 [INFO][4185] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42" Namespace="calico-system" Pod="calico-kube-controllers-69f7f46b8c-2nl2l" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:29.508258 containerd[1584]: time="2026-04-14T01:11:29.506213164Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:11:29.508258 containerd[1584]: time="2026-04-14T01:11:29.506267303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:11:29.508258 containerd[1584]: time="2026-04-14T01:11:29.506417522Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:29.508258 containerd[1584]: time="2026-04-14T01:11:29.507153259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:29.507364 systemd-networkd[1244]: cali9eeeaded672: Link UP Apr 14 01:11:29.509662 containerd[1584]: time="2026-04-14T01:11:29.509355407Z" level=info msg="StartContainer for \"d2c997caadcdc5ebc2d6909fc16e5f36c2b084e8d655f73bfe41e8d39a67c7e8\" returns successfully" Apr 14 01:11:29.509240 systemd-networkd[1244]: cali9eeeaded672: Gained carrier Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:28.524 [ERROR][4156] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:28.547 [INFO][4156] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0 calico-apiserver-858fc86974- calico-system 8cfb8838-233c-4923-a3d1-211c57385c00 1008 0 2026-04-14 01:11:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:858fc86974 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost 
calico-apiserver-858fc86974-lgjnc eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9eeeaded672 [] [] }} ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" Namespace="calico-system" Pod="calico-apiserver-858fc86974-lgjnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--lgjnc-" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:28.547 [INFO][4156] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" Namespace="calico-system" Pod="calico-apiserver-858fc86974-lgjnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:28.697 [INFO][4232] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" HandleID="k8s-pod-network.c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:28.752 [INFO][4232] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" HandleID="k8s-pod-network.c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e710), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-858fc86974-lgjnc", "timestamp":"2026-04-14 01:11:28.69752709 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003506e0)} Apr 14 01:11:29.539972 
containerd[1584]: 2026-04-14 01:11:28.752 [INFO][4232] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.371 [INFO][4232] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.371 [INFO][4232] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.380 [INFO][4232] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" host="localhost" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.392 [INFO][4232] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.406 [INFO][4232] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.413 [INFO][4232] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.431 [INFO][4232] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.432 [INFO][4232] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" host="localhost" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.437 [INFO][4232] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41 Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.446 [INFO][4232] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" host="localhost" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.462 [INFO][4232] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" host="localhost" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.462 [INFO][4232] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" host="localhost" Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.464 [INFO][4232] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:29.539972 containerd[1584]: 2026-04-14 01:11:29.464 [INFO][4232] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" HandleID="k8s-pod-network.c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:29.540543 containerd[1584]: 2026-04-14 01:11:29.478 [INFO][4156] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" Namespace="calico-system" Pod="calico-apiserver-858fc86974-lgjnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0", GenerateName:"calico-apiserver-858fc86974-", Namespace:"calico-system", SelfLink:"", UID:"8cfb8838-233c-4923-a3d1-211c57385c00", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858fc86974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-858fc86974-lgjnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9eeeaded672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:29.540543 containerd[1584]: 2026-04-14 01:11:29.480 [INFO][4156] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" Namespace="calico-system" Pod="calico-apiserver-858fc86974-lgjnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:29.540543 containerd[1584]: 2026-04-14 01:11:29.481 [INFO][4156] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9eeeaded672 ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" Namespace="calico-system" Pod="calico-apiserver-858fc86974-lgjnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:29.540543 containerd[1584]: 2026-04-14 01:11:29.509 [INFO][4156] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" 
Namespace="calico-system" Pod="calico-apiserver-858fc86974-lgjnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:29.540543 containerd[1584]: 2026-04-14 01:11:29.512 [INFO][4156] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" Namespace="calico-system" Pod="calico-apiserver-858fc86974-lgjnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0", GenerateName:"calico-apiserver-858fc86974-", Namespace:"calico-system", SelfLink:"", UID:"8cfb8838-233c-4923-a3d1-211c57385c00", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858fc86974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41", Pod:"calico-apiserver-858fc86974-lgjnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9eeeaded672", MAC:"c6:1f:68:15:f8:f5", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:29.540543 containerd[1584]: 2026-04-14 01:11:29.533 [INFO][4156] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41" Namespace="calico-system" Pod="calico-apiserver-858fc86974-lgjnc" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:29.547309 containerd[1584]: time="2026-04-14T01:11:29.541916211Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:11:29.547309 containerd[1584]: time="2026-04-14T01:11:29.541962108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:11:29.547309 containerd[1584]: time="2026-04-14T01:11:29.541973506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:29.547309 containerd[1584]: time="2026-04-14T01:11:29.542042425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:29.619727 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:11:29.640383 containerd[1584]: time="2026-04-14T01:11:29.639363577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b6b7f96f4-wbp4k,Uid:78f967c9-27f4-4564-b1a6-08e9da295149,Namespace:calico-system,Attempt:0,}" Apr 14 01:11:29.643560 containerd[1584]: time="2026-04-14T01:11:29.643477340Z" level=info msg="StartContainer for \"1bdfaf5323873cce231bfd9efc08c6b3f2dcf9695ff3c0fa6fc04ae46865cfaa\" returns successfully" Apr 14 01:11:29.644914 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:11:29.671125 systemd-networkd[1244]: calia5e40ebc88d: Link UP Apr 14 01:11:29.673834 systemd-networkd[1244]: calia5e40ebc88d: Gained carrier Apr 14 01:11:29.679464 containerd[1584]: time="2026-04-14T01:11:29.674483084Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:11:29.681060 containerd[1584]: time="2026-04-14T01:11:29.680655973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:11:29.681060 containerd[1584]: time="2026-04-14T01:11:29.680711391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:29.681060 containerd[1584]: time="2026-04-14T01:11:29.680815189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:28.642 [ERROR][4199] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:28.679 [INFO][4199] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0 calico-apiserver-858fc86974- calico-system be5517b6-6d7c-4af7-8c09-ffaa013ba114 1013 0 2026-04-14 01:11:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:858fc86974 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-858fc86974-6fm4q eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia5e40ebc88d [] [] }} ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" Namespace="calico-system" Pod="calico-apiserver-858fc86974-6fm4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--6fm4q-" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:28.680 [INFO][4199] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" Namespace="calico-system" Pod="calico-apiserver-858fc86974-6fm4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:28.777 [INFO][4271] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" HandleID="k8s-pod-network.83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" 
Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:28.801 [INFO][4271] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" HandleID="k8s-pod-network.83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006c6340), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-apiserver-858fc86974-6fm4q", "timestamp":"2026-04-14 01:11:28.77715276 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000196000)} Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:28.801 [INFO][4271] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.463 [INFO][4271] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.463 [INFO][4271] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.487 [INFO][4271] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" host="localhost" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.514 [INFO][4271] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.574 [INFO][4271] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.588 [INFO][4271] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.594 [INFO][4271] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.595 [INFO][4271] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" host="localhost" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.604 [INFO][4271] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9 Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.616 [INFO][4271] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" host="localhost" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.632 [INFO][4271] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" host="localhost" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.633 [INFO][4271] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" host="localhost" Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.633 [INFO][4271] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:29.719666 containerd[1584]: 2026-04-14 01:11:29.633 [INFO][4271] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" HandleID="k8s-pod-network.83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" Apr 14 01:11:29.720733 containerd[1584]: 2026-04-14 01:11:29.642 [INFO][4199] cni-plugin/k8s.go 418: Populated endpoint ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" Namespace="calico-system" Pod="calico-apiserver-858fc86974-6fm4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0", GenerateName:"calico-apiserver-858fc86974-", Namespace:"calico-system", SelfLink:"", UID:"be5517b6-6d7c-4af7-8c09-ffaa013ba114", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858fc86974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-858fc86974-6fm4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia5e40ebc88d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:29.720733 containerd[1584]: 2026-04-14 01:11:29.642 [INFO][4199] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" Namespace="calico-system" Pod="calico-apiserver-858fc86974-6fm4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" Apr 14 01:11:29.720733 containerd[1584]: 2026-04-14 01:11:29.642 [INFO][4199] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5e40ebc88d ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" Namespace="calico-system" Pod="calico-apiserver-858fc86974-6fm4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" Apr 14 01:11:29.720733 containerd[1584]: 2026-04-14 01:11:29.680 [INFO][4199] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" Namespace="calico-system" Pod="calico-apiserver-858fc86974-6fm4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" Apr 14 01:11:29.720733 containerd[1584]: 2026-04-14 01:11:29.685 [INFO][4199] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" Namespace="calico-system" Pod="calico-apiserver-858fc86974-6fm4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0", GenerateName:"calico-apiserver-858fc86974-", Namespace:"calico-system", SelfLink:"", UID:"be5517b6-6d7c-4af7-8c09-ffaa013ba114", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858fc86974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9", Pod:"calico-apiserver-858fc86974-6fm4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia5e40ebc88d", MAC:"6a:d8:08:47:fb:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:29.720733 containerd[1584]: 2026-04-14 01:11:29.709 [INFO][4199] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9" Namespace="calico-system" Pod="calico-apiserver-858fc86974-6fm4q" WorkloadEndpoint="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" Apr 14 01:11:29.732500 systemd-networkd[1244]: cali41b67747784: Gained IPv6LL Apr 14 01:11:29.738827 kernel: calico-node[4420]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 14 01:11:29.750094 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:11:29.781493 containerd[1584]: time="2026-04-14T01:11:29.781391069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69f7f46b8c-2nl2l,Uid:0e971146-3f82-4810-b7e0-9307354ac58e,Namespace:calico-system,Attempt:1,} returns sandbox id \"108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42\"" Apr 14 01:11:29.785892 containerd[1584]: time="2026-04-14T01:11:29.785156019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 14 01:11:29.800510 containerd[1584]: time="2026-04-14T01:11:29.798931411Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:11:29.800510 containerd[1584]: time="2026-04-14T01:11:29.799041187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:11:29.800510 containerd[1584]: time="2026-04-14T01:11:29.799054328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:29.800510 containerd[1584]: time="2026-04-14T01:11:29.799137738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:29.851865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1039345851.mount: Deactivated successfully. Apr 14 01:11:29.876878 containerd[1584]: time="2026-04-14T01:11:29.876302491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858fc86974-lgjnc,Uid:8cfb8838-233c-4923-a3d1-211c57385c00,Namespace:calico-system,Attempt:1,} returns sandbox id \"c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41\"" Apr 14 01:11:29.877876 containerd[1584]: time="2026-04-14T01:11:29.877800946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-gpjhg,Uid:c6df9bea-e3c6-4c2c-952c-aaf1341b5033,Namespace:calico-system,Attempt:1,} returns sandbox id \"d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891\"" Apr 14 01:11:29.891134 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:11:29.919791 kubelet[2666]: I0414 01:11:29.919270 2666 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad" path="/var/lib/kubelet/pods/3e12e9e0-f3ab-4bbd-a2fd-5a98701989ad/volumes" Apr 14 01:11:29.983292 containerd[1584]: time="2026-04-14T01:11:29.983261202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-858fc86974-6fm4q,Uid:be5517b6-6d7c-4af7-8c09-ffaa013ba114,Namespace:calico-system,Attempt:1,} returns sandbox id \"83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9\"" Apr 14 01:11:30.034868 systemd-networkd[1244]: calieb888985eb3: Link UP Apr 14 01:11:30.036688 systemd-networkd[1244]: calieb888985eb3: Gained carrier Apr 14 01:11:30.049293 systemd-networkd[1244]: cali492ccbaaeac: Gained IPv6LL Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.828 [INFO][4715] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0 whisker-b6b7f96f4- calico-system 78f967c9-27f4-4564-b1a6-08e9da295149 1049 0 2026-04-14 01:11:29 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:b6b7f96f4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-b6b7f96f4-wbp4k eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calieb888985eb3 [] [] }} ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Namespace="calico-system" Pod="whisker-b6b7f96f4-wbp4k" WorkloadEndpoint="localhost-k8s-whisker--b6b7f96f4--wbp4k-" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.828 [INFO][4715] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Namespace="calico-system" Pod="whisker-b6b7f96f4-wbp4k" WorkloadEndpoint="localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.905 [INFO][4793] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" HandleID="k8s-pod-network.236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Workload="localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.926 [INFO][4793] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" HandleID="k8s-pod-network.236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Workload="localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000276af0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-b6b7f96f4-wbp4k", "timestamp":"2026-04-14 01:11:29.905032977 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000566420)} Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.926 [INFO][4793] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.926 [INFO][4793] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.926 [INFO][4793] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.943 [INFO][4793] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" host="localhost" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.972 [INFO][4793] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.985 [INFO][4793] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.990 [INFO][4793] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.997 [INFO][4793] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:29.998 [INFO][4793] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" host="localhost" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:30.001 [INFO][4793] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8 Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:30.010 [INFO][4793] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" host="localhost" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:30.024 [INFO][4793] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" host="localhost" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:30.025 [INFO][4793] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" host="localhost" Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:30.025 [INFO][4793] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 14 01:11:30.062291 containerd[1584]: 2026-04-14 01:11:30.026 [INFO][4793] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" HandleID="k8s-pod-network.236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Workload="localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0" Apr 14 01:11:30.063174 containerd[1584]: 2026-04-14 01:11:30.031 [INFO][4715] cni-plugin/k8s.go 418: Populated endpoint ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Namespace="calico-system" Pod="whisker-b6b7f96f4-wbp4k" WorkloadEndpoint="localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0", GenerateName:"whisker-b6b7f96f4-", Namespace:"calico-system", SelfLink:"", UID:"78f967c9-27f4-4564-b1a6-08e9da295149", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b6b7f96f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-b6b7f96f4-wbp4k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieb888985eb3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:30.063174 containerd[1584]: 2026-04-14 01:11:30.031 [INFO][4715] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Namespace="calico-system" Pod="whisker-b6b7f96f4-wbp4k" WorkloadEndpoint="localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0" Apr 14 01:11:30.063174 containerd[1584]: 2026-04-14 01:11:30.031 [INFO][4715] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb888985eb3 ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Namespace="calico-system" Pod="whisker-b6b7f96f4-wbp4k" WorkloadEndpoint="localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0" Apr 14 01:11:30.063174 containerd[1584]: 2026-04-14 01:11:30.037 [INFO][4715] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Namespace="calico-system" Pod="whisker-b6b7f96f4-wbp4k" WorkloadEndpoint="localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0" Apr 14 01:11:30.063174 containerd[1584]: 2026-04-14 01:11:30.037 [INFO][4715] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Namespace="calico-system" Pod="whisker-b6b7f96f4-wbp4k" WorkloadEndpoint="localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0", GenerateName:"whisker-b6b7f96f4-", Namespace:"calico-system", SelfLink:"", UID:"78f967c9-27f4-4564-b1a6-08e9da295149", ResourceVersion:"1049", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 29, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"b6b7f96f4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8", Pod:"whisker-b6b7f96f4-wbp4k", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calieb888985eb3", MAC:"4e:21:04:58:23:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:30.063174 containerd[1584]: 2026-04-14 01:11:30.058 [INFO][4715] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8" Namespace="calico-system" Pod="whisker-b6b7f96f4-wbp4k" WorkloadEndpoint="localhost-k8s-whisker--b6b7f96f4--wbp4k-eth0" Apr 14 01:11:30.096461 containerd[1584]: time="2026-04-14T01:11:30.095828272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:11:30.096461 containerd[1584]: time="2026-04-14T01:11:30.095952897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:11:30.096461 containerd[1584]: time="2026-04-14T01:11:30.095966097Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:30.099557 containerd[1584]: time="2026-04-14T01:11:30.097402506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:30.121482 kubelet[2666]: E0414 01:11:30.121425 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:30.135508 kubelet[2666]: E0414 01:11:30.133456 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:30.144389 kubelet[2666]: I0414 01:11:30.144151 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pdxgp" podStartSLOduration=34.144123383 podStartE2EDuration="34.144123383s" podCreationTimestamp="2026-04-14 01:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:11:30.142027668 +0000 UTC m=+40.386905945" watchObservedRunningTime="2026-04-14 01:11:30.144123383 +0000 UTC m=+40.389001661" Apr 14 01:11:30.189956 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:11:30.207830 kubelet[2666]: I0414 01:11:30.207623 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t9cp8" podStartSLOduration=34.20758506 podStartE2EDuration="34.20758506s" podCreationTimestamp="2026-04-14 01:10:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-14 01:11:30.206517265 +0000 UTC m=+40.451395547" watchObservedRunningTime="2026-04-14 01:11:30.20758506 +0000 UTC 
m=+40.452463344" Apr 14 01:11:30.262173 containerd[1584]: time="2026-04-14T01:11:30.262031786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-b6b7f96f4-wbp4k,Uid:78f967c9-27f4-4564-b1a6-08e9da295149,Namespace:calico-system,Attempt:0,} returns sandbox id \"236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8\"" Apr 14 01:11:30.407185 systemd-networkd[1244]: vxlan.calico: Link UP Apr 14 01:11:30.407190 systemd-networkd[1244]: vxlan.calico: Gained carrier Apr 14 01:11:30.561271 systemd-networkd[1244]: calia3ad035e267: Gained IPv6LL Apr 14 01:11:31.072913 systemd-networkd[1244]: calia5e40ebc88d: Gained IPv6LL Apr 14 01:11:31.137905 kubelet[2666]: E0414 01:11:31.136844 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:31.137905 kubelet[2666]: E0414 01:11:31.136904 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:31.202807 systemd-networkd[1244]: cali4fbea8dda84: Gained IPv6LL Apr 14 01:11:31.264580 systemd-networkd[1244]: cali9eeeaded672: Gained IPv6LL Apr 14 01:11:31.649046 systemd-networkd[1244]: calieb888985eb3: Gained IPv6LL Apr 14 01:11:31.667591 systemd[1]: Started sshd@8-10.0.0.8:22-10.0.0.1:47268.service - OpenSSH per-connection server daemon (10.0.0.1:47268). Apr 14 01:11:31.720967 sshd[4979]: Accepted publickey for core from 10.0.0.1 port 47268 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU Apr 14 01:11:31.723829 sshd[4979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:11:31.728687 systemd-logind[1568]: New session 9 of user core. Apr 14 01:11:31.737870 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 14 01:11:32.036083 sshd[4979]: pam_unix(sshd:session): session closed for user core Apr 14 01:11:32.041183 systemd-logind[1568]: Session 9 logged out. Waiting for processes to exit. Apr 14 01:11:32.042269 systemd[1]: sshd@8-10.0.0.8:22-10.0.0.1:47268.service: Deactivated successfully. Apr 14 01:11:32.046155 systemd[1]: session-9.scope: Deactivated successfully. Apr 14 01:11:32.048030 systemd-logind[1568]: Removed session 9. Apr 14 01:11:32.142661 kubelet[2666]: E0414 01:11:32.142603 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:32.142661 kubelet[2666]: E0414 01:11:32.142680 2666 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Apr 14 01:11:32.352672 systemd-networkd[1244]: vxlan.calico: Gained IPv6LL Apr 14 01:11:33.036659 containerd[1584]: time="2026-04-14T01:11:33.036498425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:33.038221 containerd[1584]: time="2026-04-14T01:11:33.038021666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 14 01:11:33.039363 containerd[1584]: time="2026-04-14T01:11:33.039262130Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:33.042084 containerd[1584]: time="2026-04-14T01:11:33.041955243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:33.042824 
containerd[1584]: time="2026-04-14T01:11:33.042774835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.257132194s" Apr 14 01:11:33.042824 containerd[1584]: time="2026-04-14T01:11:33.042810764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 14 01:11:33.044445 containerd[1584]: time="2026-04-14T01:11:33.044404062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 14 01:11:33.055738 containerd[1584]: time="2026-04-14T01:11:33.055695732Z" level=info msg="CreateContainer within sandbox \"108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 14 01:11:33.070225 containerd[1584]: time="2026-04-14T01:11:33.070126575Z" level=info msg="CreateContainer within sandbox \"108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a99b8e947e50f6f5d5f308ee0cab0e1b270b2145eb9daa450786de4dbb821871\"" Apr 14 01:11:33.073028 containerd[1584]: time="2026-04-14T01:11:33.071877746Z" level=info msg="StartContainer for \"a99b8e947e50f6f5d5f308ee0cab0e1b270b2145eb9daa450786de4dbb821871\"" Apr 14 01:11:33.157902 containerd[1584]: time="2026-04-14T01:11:33.157755684Z" level=info msg="StartContainer for \"a99b8e947e50f6f5d5f308ee0cab0e1b270b2145eb9daa450786de4dbb821871\" returns successfully" Apr 14 01:11:34.193844 systemd[1]: 
run-containerd-runc-k8s.io-a99b8e947e50f6f5d5f308ee0cab0e1b270b2145eb9daa450786de4dbb821871-runc.P36x9x.mount: Deactivated successfully. Apr 14 01:11:34.236964 kubelet[2666]: I0414 01:11:34.236809 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69f7f46b8c-2nl2l" podStartSLOduration=24.975914599 podStartE2EDuration="28.236793151s" podCreationTimestamp="2026-04-14 01:11:06 +0000 UTC" firstStartedPulling="2026-04-14 01:11:29.783005259 +0000 UTC m=+40.027883547" lastFinishedPulling="2026-04-14 01:11:33.04388382 +0000 UTC m=+43.288762099" observedRunningTime="2026-04-14 01:11:34.171648978 +0000 UTC m=+44.416527273" watchObservedRunningTime="2026-04-14 01:11:34.236793151 +0000 UTC m=+44.481671439" Apr 14 01:11:35.477291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1952157288.mount: Deactivated successfully. Apr 14 01:11:35.992040 containerd[1584]: time="2026-04-14T01:11:35.991825992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:35.993155 containerd[1584]: time="2026-04-14T01:11:35.993070448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 14 01:11:35.994796 containerd[1584]: time="2026-04-14T01:11:35.994615320Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:36.005291 containerd[1584]: time="2026-04-14T01:11:36.005098830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:36.006964 containerd[1584]: time="2026-04-14T01:11:36.006817972Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.962375936s" Apr 14 01:11:36.006964 containerd[1584]: time="2026-04-14T01:11:36.006913290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 14 01:11:36.009930 containerd[1584]: time="2026-04-14T01:11:36.009826020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 14 01:11:36.016980 containerd[1584]: time="2026-04-14T01:11:36.016836209Z" level=info msg="CreateContainer within sandbox \"d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 14 01:11:36.077950 containerd[1584]: time="2026-04-14T01:11:36.077800939Z" level=info msg="CreateContainer within sandbox \"d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"cdec8679418d9383cff79d33c7c7f14e2feee2aa5572ef3097720632fde8c46f\"" Apr 14 01:11:36.078752 containerd[1584]: time="2026-04-14T01:11:36.078712055Z" level=info msg="StartContainer for \"cdec8679418d9383cff79d33c7c7f14e2feee2aa5572ef3097720632fde8c46f\"" Apr 14 01:11:36.157551 containerd[1584]: time="2026-04-14T01:11:36.157398613Z" level=info msg="StartContainer for \"cdec8679418d9383cff79d33c7c7f14e2feee2aa5572ef3097720632fde8c46f\" returns successfully" Apr 14 01:11:37.048901 systemd[1]: Started sshd@9-10.0.0.8:22-10.0.0.1:57140.service - OpenSSH per-connection server daemon (10.0.0.1:57140). 
Apr 14 01:11:37.096548 sshd[5141]: Accepted publickey for core from 10.0.0.1 port 57140 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU Apr 14 01:11:37.098108 sshd[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:11:37.103948 systemd-logind[1568]: New session 10 of user core. Apr 14 01:11:37.118081 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 14 01:11:37.334758 sshd[5141]: pam_unix(sshd:session): session closed for user core Apr 14 01:11:37.338423 systemd[1]: sshd@9-10.0.0.8:22-10.0.0.1:57140.service: Deactivated successfully. Apr 14 01:11:37.340636 systemd-logind[1568]: Session 10 logged out. Waiting for processes to exit. Apr 14 01:11:37.340703 systemd[1]: session-10.scope: Deactivated successfully. Apr 14 01:11:37.341888 systemd-logind[1568]: Removed session 10. Apr 14 01:11:38.964702 containerd[1584]: time="2026-04-14T01:11:38.956983006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:38.965145 containerd[1584]: time="2026-04-14T01:11:38.958043440Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 14 01:11:38.965145 containerd[1584]: time="2026-04-14T01:11:38.963252391Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 2.953377104s" Apr 14 01:11:38.965145 containerd[1584]: time="2026-04-14T01:11:38.964866544Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 14 
01:11:38.965458 containerd[1584]: time="2026-04-14T01:11:38.965239589Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:38.965829 containerd[1584]: time="2026-04-14T01:11:38.965765168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:38.966964 containerd[1584]: time="2026-04-14T01:11:38.966949128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 14 01:11:39.003096 containerd[1584]: time="2026-04-14T01:11:39.002950102Z" level=info msg="CreateContainer within sandbox \"c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 14 01:11:39.023692 containerd[1584]: time="2026-04-14T01:11:39.023523279Z" level=info msg="CreateContainer within sandbox \"c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a58752044a2225c71d4ea4b550d9778a55232df8003e2ab3e489eebb4070d240\"" Apr 14 01:11:39.026131 containerd[1584]: time="2026-04-14T01:11:39.024625240Z" level=info msg="StartContainer for \"a58752044a2225c71d4ea4b550d9778a55232df8003e2ab3e489eebb4070d240\"" Apr 14 01:11:39.131350 containerd[1584]: time="2026-04-14T01:11:39.131211205Z" level=info msg="StartContainer for \"a58752044a2225c71d4ea4b550d9778a55232df8003e2ab3e489eebb4070d240\" returns successfully" Apr 14 01:11:39.203861 kubelet[2666]: I0414 01:11:39.202511 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-858fc86974-lgjnc" podStartSLOduration=24.153557442 podStartE2EDuration="33.202495803s" podCreationTimestamp="2026-04-14 01:11:06 +0000 UTC" 
firstStartedPulling="2026-04-14 01:11:29.917613337 +0000 UTC m=+40.162491615" lastFinishedPulling="2026-04-14 01:11:38.966551698 +0000 UTC m=+49.211429976" observedRunningTime="2026-04-14 01:11:39.201545595 +0000 UTC m=+49.446423877" watchObservedRunningTime="2026-04-14 01:11:39.202495803 +0000 UTC m=+49.447374092" Apr 14 01:11:39.203861 kubelet[2666]: I0414 01:11:39.202669 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-gpjhg" podStartSLOduration=27.09248832 podStartE2EDuration="33.202664819s" podCreationTimestamp="2026-04-14 01:11:06 +0000 UTC" firstStartedPulling="2026-04-14 01:11:29.898534896 +0000 UTC m=+40.143413174" lastFinishedPulling="2026-04-14 01:11:36.008711395 +0000 UTC m=+46.253589673" observedRunningTime="2026-04-14 01:11:37.191259594 +0000 UTC m=+47.436137888" watchObservedRunningTime="2026-04-14 01:11:39.202664819 +0000 UTC m=+49.447543105" Apr 14 01:11:39.384634 containerd[1584]: time="2026-04-14T01:11:39.384073315Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:39.385841 containerd[1584]: time="2026-04-14T01:11:39.385777608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 14 01:11:39.389215 containerd[1584]: time="2026-04-14T01:11:39.389089931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 421.924429ms" Apr 14 01:11:39.389215 containerd[1584]: time="2026-04-14T01:11:39.389198692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 14 01:11:39.390693 containerd[1584]: time="2026-04-14T01:11:39.390389234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 14 01:11:39.395531 containerd[1584]: time="2026-04-14T01:11:39.395495290Z" level=info msg="CreateContainer within sandbox \"83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 14 01:11:39.442794 containerd[1584]: time="2026-04-14T01:11:39.442673100Z" level=info msg="CreateContainer within sandbox \"83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"29d8267f650725b7261768d171ed6a93c3f288514db7e665e0abdc28138c5b5c\"" Apr 14 01:11:39.443745 containerd[1584]: time="2026-04-14T01:11:39.443719676Z" level=info msg="StartContainer for \"29d8267f650725b7261768d171ed6a93c3f288514db7e665e0abdc28138c5b5c\"" Apr 14 01:11:39.534982 containerd[1584]: time="2026-04-14T01:11:39.534758958Z" level=info msg="StartContainer for \"29d8267f650725b7261768d171ed6a93c3f288514db7e665e0abdc28138c5b5c\" returns successfully" Apr 14 01:11:39.876370 containerd[1584]: time="2026-04-14T01:11:39.875741351Z" level=info msg="StopPodSandbox for \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\"" Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:39.978 [INFO][5321] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:39.985 [INFO][5321] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" iface="eth0" netns="/var/run/netns/cni-aeaeccfa-f13b-2613-72c7-8c7cddf05e43" Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:39.985 [INFO][5321] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" iface="eth0" netns="/var/run/netns/cni-aeaeccfa-f13b-2613-72c7-8c7cddf05e43" Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:39.986 [INFO][5321] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" iface="eth0" netns="/var/run/netns/cni-aeaeccfa-f13b-2613-72c7-8c7cddf05e43" Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:39.986 [INFO][5321] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:39.986 [INFO][5321] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:40.064 [INFO][5331] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" HandleID="k8s-pod-network.13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:40.066 [INFO][5331] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:40.066 [INFO][5331] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:40.074 [WARNING][5331] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" HandleID="k8s-pod-network.13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:40.074 [INFO][5331] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" HandleID="k8s-pod-network.13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:40.076 [INFO][5331] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:40.086932 containerd[1584]: 2026-04-14 01:11:40.079 [INFO][5321] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:40.089587 systemd[1]: run-netns-cni\x2daeaeccfa\x2df13b\x2d2613\x2d72c7\x2d8c7cddf05e43.mount: Deactivated successfully. 
Apr 14 01:11:40.090763 containerd[1584]: time="2026-04-14T01:11:40.090704390Z" level=info msg="TearDown network for sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\" successfully" Apr 14 01:11:40.090804 containerd[1584]: time="2026-04-14T01:11:40.090769094Z" level=info msg="StopPodSandbox for \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\" returns successfully" Apr 14 01:11:40.091926 containerd[1584]: time="2026-04-14T01:11:40.091755129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mzw4,Uid:241647e8-70b4-4fa4-aa60-9aba8555b739,Namespace:calico-system,Attempt:1,}" Apr 14 01:11:40.200989 kubelet[2666]: I0414 01:11:40.200890 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 01:11:40.309450 systemd-networkd[1244]: calid42d66c7e60: Link UP Apr 14 01:11:40.310614 systemd-networkd[1244]: calid42d66c7e60: Gained carrier Apr 14 01:11:40.337580 kubelet[2666]: I0414 01:11:40.337473 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-858fc86974-6fm4q" podStartSLOduration=24.931760612 podStartE2EDuration="34.337457322s" podCreationTimestamp="2026-04-14 01:11:06 +0000 UTC" firstStartedPulling="2026-04-14 01:11:29.984531385 +0000 UTC m=+40.229409662" lastFinishedPulling="2026-04-14 01:11:39.390228092 +0000 UTC m=+49.635106372" observedRunningTime="2026-04-14 01:11:40.215410539 +0000 UTC m=+50.460288817" watchObservedRunningTime="2026-04-14 01:11:40.337457322 +0000 UTC m=+50.582335611" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.181 [INFO][5338] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5mzw4-eth0 csi-node-driver- calico-system 241647e8-70b4-4fa4-aa60-9aba8555b739 1173 0 2026-04-14 01:11:06 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c 
k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5mzw4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid42d66c7e60 [] [] }} ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Namespace="calico-system" Pod="csi-node-driver-5mzw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mzw4-" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.181 [INFO][5338] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Namespace="calico-system" Pod="csi-node-driver-5mzw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.237 [INFO][5352] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" HandleID="k8s-pod-network.e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.244 [INFO][5352] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" HandleID="k8s-pod-network.e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00047c130), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5mzw4", "timestamp":"2026-04-14 01:11:40.237424579 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000184000)} Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.244 [INFO][5352] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.244 [INFO][5352] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.245 [INFO][5352] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.246 [INFO][5352] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" host="localhost" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.254 [INFO][5352] ipam/ipam.go 409: Looking up existing affinities for host host="localhost" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.265 [INFO][5352] ipam/ipam.go 526: Trying affinity for 192.168.88.128/26 host="localhost" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.268 [INFO][5352] ipam/ipam.go 160: Attempting to load block cidr=192.168.88.128/26 host="localhost" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.273 [INFO][5352] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.273 [INFO][5352] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" host="localhost" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.278 [INFO][5352] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042 Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.291 [INFO][5352] ipam/ipam.go 
1272: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" host="localhost" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.301 [INFO][5352] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" host="localhost" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.301 [INFO][5352] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" host="localhost" Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.302 [INFO][5352] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:40.347377 containerd[1584]: 2026-04-14 01:11:40.302 [INFO][5352] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" HandleID="k8s-pod-network.e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:40.350163 containerd[1584]: 2026-04-14 01:11:40.306 [INFO][5338] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Namespace="calico-system" Pod="csi-node-driver-5mzw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mzw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mzw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"241647e8-70b4-4fa4-aa60-9aba8555b739", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5mzw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid42d66c7e60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:40.350163 containerd[1584]: 2026-04-14 01:11:40.306 [INFO][5338] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Namespace="calico-system" Pod="csi-node-driver-5mzw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:40.350163 containerd[1584]: 2026-04-14 01:11:40.306 [INFO][5338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid42d66c7e60 ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Namespace="calico-system" Pod="csi-node-driver-5mzw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:40.350163 containerd[1584]: 2026-04-14 01:11:40.310 [INFO][5338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Namespace="calico-system" 
Pod="csi-node-driver-5mzw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:40.350163 containerd[1584]: 2026-04-14 01:11:40.312 [INFO][5338] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Namespace="calico-system" Pod="csi-node-driver-5mzw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mzw4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mzw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"241647e8-70b4-4fa4-aa60-9aba8555b739", ResourceVersion:"1173", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042", Pod:"csi-node-driver-5mzw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid42d66c7e60", MAC:"8a:85:4a:1b:09:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:40.350163 containerd[1584]: 2026-04-14 01:11:40.338 [INFO][5338] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042" Namespace="calico-system" Pod="csi-node-driver-5mzw4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:40.397301 containerd[1584]: time="2026-04-14T01:11:40.397111659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 14 01:11:40.397301 containerd[1584]: time="2026-04-14T01:11:40.397440439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 14 01:11:40.398180 containerd[1584]: time="2026-04-14T01:11:40.397900094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:40.398667 containerd[1584]: time="2026-04-14T01:11:40.398593368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 14 01:11:40.448936 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Apr 14 01:11:40.465628 containerd[1584]: time="2026-04-14T01:11:40.465469104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5mzw4,Uid:241647e8-70b4-4fa4-aa60-9aba8555b739,Namespace:calico-system,Attempt:1,} returns sandbox id \"e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042\"" Apr 14 01:11:41.197243 containerd[1584]: time="2026-04-14T01:11:41.197004498Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:41.197822 containerd[1584]: time="2026-04-14T01:11:41.197795941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 14 01:11:41.198956 containerd[1584]: time="2026-04-14T01:11:41.198909533Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:41.226870 kubelet[2666]: I0414 01:11:41.226753 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 14 01:11:41.231640 containerd[1584]: time="2026-04-14T01:11:41.231007369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:41.236212 containerd[1584]: time="2026-04-14T01:11:41.235517160Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.845019106s" Apr 14 01:11:41.236212 containerd[1584]: time="2026-04-14T01:11:41.235802228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 14 01:11:41.245223 containerd[1584]: time="2026-04-14T01:11:41.243984504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 14 01:11:41.292020 containerd[1584]: time="2026-04-14T01:11:41.291914709Z" level=info msg="CreateContainer within sandbox \"236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 14 01:11:41.308808 containerd[1584]: time="2026-04-14T01:11:41.308705569Z" level=info msg="CreateContainer within sandbox \"236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"1bdf54ca9b0cca332951b2496620fc152823655ea999b8a7b02e33018dd09e08\"" Apr 14 01:11:41.309525 containerd[1584]: time="2026-04-14T01:11:41.309501618Z" level=info msg="StartContainer for \"1bdf54ca9b0cca332951b2496620fc152823655ea999b8a7b02e33018dd09e08\"" Apr 14 01:11:41.360144 systemd[1]: run-containerd-runc-k8s.io-1bdf54ca9b0cca332951b2496620fc152823655ea999b8a7b02e33018dd09e08-runc.AvlgoH.mount: Deactivated successfully. Apr 14 01:11:41.416066 containerd[1584]: time="2026-04-14T01:11:41.415957607Z" level=info msg="StartContainer for \"1bdf54ca9b0cca332951b2496620fc152823655ea999b8a7b02e33018dd09e08\" returns successfully" Apr 14 01:11:41.952708 systemd-networkd[1244]: calid42d66c7e60: Gained IPv6LL Apr 14 01:11:42.353093 systemd[1]: Started sshd@10-10.0.0.8:22-10.0.0.1:57150.service - OpenSSH per-connection server daemon (10.0.0.1:57150). 
Apr 14 01:11:42.400302 sshd[5502]: Accepted publickey for core from 10.0.0.1 port 57150 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU Apr 14 01:11:42.402246 sshd[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:11:42.406698 systemd-logind[1568]: New session 11 of user core. Apr 14 01:11:42.413903 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 14 01:11:42.724415 sshd[5502]: pam_unix(sshd:session): session closed for user core Apr 14 01:11:42.733609 systemd[1]: Started sshd@11-10.0.0.8:22-10.0.0.1:57154.service - OpenSSH per-connection server daemon (10.0.0.1:57154). Apr 14 01:11:42.733914 systemd[1]: sshd@10-10.0.0.8:22-10.0.0.1:57150.service: Deactivated successfully. Apr 14 01:11:42.736528 systemd-logind[1568]: Session 11 logged out. Waiting for processes to exit. Apr 14 01:11:42.737303 systemd[1]: session-11.scope: Deactivated successfully. Apr 14 01:11:42.738269 systemd-logind[1568]: Removed session 11. Apr 14 01:11:42.761207 sshd[5515]: Accepted publickey for core from 10.0.0.1 port 57154 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU Apr 14 01:11:42.764029 sshd[5515]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:11:42.772063 systemd-logind[1568]: New session 12 of user core. Apr 14 01:11:42.776883 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 14 01:11:42.976845 sshd[5515]: pam_unix(sshd:session): session closed for user core Apr 14 01:11:42.989514 systemd[1]: Started sshd@12-10.0.0.8:22-10.0.0.1:57156.service - OpenSSH per-connection server daemon (10.0.0.1:57156). Apr 14 01:11:42.997784 systemd[1]: sshd@11-10.0.0.8:22-10.0.0.1:57154.service: Deactivated successfully. Apr 14 01:11:42.999954 systemd[1]: session-12.scope: Deactivated successfully. Apr 14 01:11:43.014973 systemd-logind[1568]: Session 12 logged out. Waiting for processes to exit. 
Apr 14 01:11:43.021545 systemd-logind[1568]: Removed session 12. Apr 14 01:11:43.057442 sshd[5533]: Accepted publickey for core from 10.0.0.1 port 57156 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU Apr 14 01:11:43.063897 sshd[5533]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:11:43.073639 systemd-logind[1568]: New session 13 of user core. Apr 14 01:11:43.085821 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 14 01:11:43.152378 containerd[1584]: time="2026-04-14T01:11:43.151676946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:43.154941 containerd[1584]: time="2026-04-14T01:11:43.154781653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 14 01:11:43.156651 containerd[1584]: time="2026-04-14T01:11:43.156413552Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:43.160373 containerd[1584]: time="2026-04-14T01:11:43.159391518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 14 01:11:43.160373 containerd[1584]: time="2026-04-14T01:11:43.160129153Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.915883019s" Apr 14 01:11:43.160373 containerd[1584]: time="2026-04-14T01:11:43.160149327Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 14 01:11:43.161620 containerd[1584]: time="2026-04-14T01:11:43.161585112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 14 01:11:43.168297 containerd[1584]: time="2026-04-14T01:11:43.167688055Z" level=info msg="CreateContainer within sandbox \"e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 14 01:11:43.231109 containerd[1584]: time="2026-04-14T01:11:43.230928861Z" level=info msg="CreateContainer within sandbox \"e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a9c1f48c00d14497a07e34958ceab43a8e896407db75e9285e006cef1723a416\"" Apr 14 01:11:43.232091 containerd[1584]: time="2026-04-14T01:11:43.232060193Z" level=info msg="StartContainer for \"a9c1f48c00d14497a07e34958ceab43a8e896407db75e9285e006cef1723a416\"" Apr 14 01:11:43.247254 sshd[5533]: pam_unix(sshd:session): session closed for user core Apr 14 01:11:43.254047 systemd[1]: sshd@12-10.0.0.8:22-10.0.0.1:57156.service: Deactivated successfully. Apr 14 01:11:43.257705 systemd-logind[1568]: Session 13 logged out. Waiting for processes to exit. Apr 14 01:11:43.258777 systemd[1]: session-13.scope: Deactivated successfully. Apr 14 01:11:43.259808 systemd-logind[1568]: Removed session 13. Apr 14 01:11:43.299474 containerd[1584]: time="2026-04-14T01:11:43.299263761Z" level=info msg="StartContainer for \"a9c1f48c00d14497a07e34958ceab43a8e896407db75e9285e006cef1723a416\" returns successfully" Apr 14 01:11:45.331469 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037030116.mount: Deactivated successfully. 
Apr 14 01:11:45.357804 containerd[1584]: time="2026-04-14T01:11:45.357642673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:11:45.358740 containerd[1584]: time="2026-04-14T01:11:45.358678282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475"
Apr 14 01:11:45.360376 containerd[1584]: time="2026-04-14T01:11:45.360275311Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:11:45.363952 containerd[1584]: time="2026-04-14T01:11:45.362797680Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:11:45.363952 containerd[1584]: time="2026-04-14T01:11:45.363513472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.201892603s"
Apr 14 01:11:45.363952 containerd[1584]: time="2026-04-14T01:11:45.363544310Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\""
Apr 14 01:11:45.365008 containerd[1584]: time="2026-04-14T01:11:45.364730651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 14 01:11:45.371743 containerd[1584]: time="2026-04-14T01:11:45.371565413Z" level=info msg="CreateContainer within sandbox \"236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Apr 14 01:11:45.390395 containerd[1584]: time="2026-04-14T01:11:45.390124157Z" level=info msg="CreateContainer within sandbox \"236a58c47354b73019d8f39f29215de733e89fd8952392c39947ba2ce603d3b8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0e3f939d20c7abafc9e84f45a46b7b58e48a166d7658eeabee5fcc7edcbcc70b\""
Apr 14 01:11:45.392558 containerd[1584]: time="2026-04-14T01:11:45.392375813Z" level=info msg="StartContainer for \"0e3f939d20c7abafc9e84f45a46b7b58e48a166d7658eeabee5fcc7edcbcc70b\""
Apr 14 01:11:45.498451 containerd[1584]: time="2026-04-14T01:11:45.498279669Z" level=info msg="StartContainer for \"0e3f939d20c7abafc9e84f45a46b7b58e48a166d7658eeabee5fcc7edcbcc70b\" returns successfully"
Apr 14 01:11:47.843590 containerd[1584]: time="2026-04-14T01:11:47.843450735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:11:47.845280 containerd[1584]: time="2026-04-14T01:11:47.845107347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 14 01:11:47.848494 containerd[1584]: time="2026-04-14T01:11:47.848288450Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:11:47.854268 containerd[1584]: time="2026-04-14T01:11:47.854111699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 14 01:11:47.855131 containerd[1584]: time="2026-04-14T01:11:47.854996897Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.490235118s"
Apr 14 01:11:47.855131 containerd[1584]: time="2026-04-14T01:11:47.855085844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 14 01:11:47.863304 containerd[1584]: time="2026-04-14T01:11:47.863092312Z" level=info msg="CreateContainer within sandbox \"e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 14 01:11:47.879692 containerd[1584]: time="2026-04-14T01:11:47.879550517Z" level=info msg="CreateContainer within sandbox \"e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7ef5ea84a4379bbbf17f236c45bd86952cd4ed1884bff0d638985716f97baba4\""
Apr 14 01:11:47.880262 containerd[1584]: time="2026-04-14T01:11:47.880195851Z" level=info msg="StartContainer for \"7ef5ea84a4379bbbf17f236c45bd86952cd4ed1884bff0d638985716f97baba4\""
Apr 14 01:11:47.976482 containerd[1584]: time="2026-04-14T01:11:47.976291178Z" level=info msg="StartContainer for \"7ef5ea84a4379bbbf17f236c45bd86952cd4ed1884bff0d638985716f97baba4\" returns successfully"
Apr 14 01:11:48.269100 systemd[1]: Started sshd@13-10.0.0.8:22-10.0.0.1:52426.service - OpenSSH per-connection server daemon (10.0.0.1:52426).
Apr 14 01:11:48.300507 kubelet[2666]: I0414 01:11:48.299635 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5mzw4" podStartSLOduration=34.913200148 podStartE2EDuration="42.299614549s" podCreationTimestamp="2026-04-14 01:11:06 +0000 UTC" firstStartedPulling="2026-04-14 01:11:40.469750518 +0000 UTC m=+50.714628797" lastFinishedPulling="2026-04-14 01:11:47.856164914 +0000 UTC m=+58.101043198" observedRunningTime="2026-04-14 01:11:48.299205037 +0000 UTC m=+58.544083331" watchObservedRunningTime="2026-04-14 01:11:48.299614549 +0000 UTC m=+58.544492846"
Apr 14 01:11:48.300507 kubelet[2666]: I0414 01:11:48.300058 2666 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-b6b7f96f4-wbp4k" podStartSLOduration=4.199464809 podStartE2EDuration="19.300047897s" podCreationTimestamp="2026-04-14 01:11:29 +0000 UTC" firstStartedPulling="2026-04-14 01:11:30.263959863 +0000 UTC m=+40.508838141" lastFinishedPulling="2026-04-14 01:11:45.364542951 +0000 UTC m=+55.609421229" observedRunningTime="2026-04-14 01:11:46.290907162 +0000 UTC m=+56.535785459" watchObservedRunningTime="2026-04-14 01:11:48.300047897 +0000 UTC m=+58.544926193"
Apr 14 01:11:48.336625 sshd[5677]: Accepted publickey for core from 10.0.0.1 port 52426 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:48.341077 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:48.356972 systemd-logind[1568]: New session 14 of user core.
Apr 14 01:11:48.368133 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 14 01:11:48.709398 sshd[5677]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:48.721274 systemd[1]: Started sshd@14-10.0.0.8:22-10.0.0.1:52442.service - OpenSSH per-connection server daemon (10.0.0.1:52442).
Apr 14 01:11:48.721996 systemd[1]: sshd@13-10.0.0.8:22-10.0.0.1:52426.service: Deactivated successfully.
Apr 14 01:11:48.727000 systemd[1]: session-14.scope: Deactivated successfully.
Apr 14 01:11:48.727721 systemd-logind[1568]: Session 14 logged out. Waiting for processes to exit.
Apr 14 01:11:48.729762 systemd-logind[1568]: Removed session 14.
Apr 14 01:11:48.755007 sshd[5689]: Accepted publickey for core from 10.0.0.1 port 52442 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:48.757230 sshd[5689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:48.767370 systemd-logind[1568]: New session 15 of user core.
Apr 14 01:11:48.782138 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 14 01:11:49.116108 sshd[5689]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:49.135195 systemd[1]: Started sshd@15-10.0.0.8:22-10.0.0.1:52444.service - OpenSSH per-connection server daemon (10.0.0.1:52444).
Apr 14 01:11:49.135878 systemd[1]: sshd@14-10.0.0.8:22-10.0.0.1:52442.service: Deactivated successfully.
Apr 14 01:11:49.139839 systemd[1]: session-15.scope: Deactivated successfully.
Apr 14 01:11:49.142223 systemd-logind[1568]: Session 15 logged out. Waiting for processes to exit.
Apr 14 01:11:49.143516 systemd-logind[1568]: Removed session 15.
Apr 14 01:11:49.165070 kubelet[2666]: I0414 01:11:49.165013 2666 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 14 01:11:49.166562 kubelet[2666]: I0414 01:11:49.166042 2666 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 14 01:11:49.172978 sshd[5702]: Accepted publickey for core from 10.0.0.1 port 52444 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:49.174846 sshd[5702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:49.190621 systemd-logind[1568]: New session 16 of user core.
Apr 14 01:11:49.203728 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 14 01:11:49.260600 kubelet[2666]: I0414 01:11:49.260469 2666 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 14 01:11:49.889605 containerd[1584]: time="2026-04-14T01:11:49.887457880Z" level=info msg="StopPodSandbox for \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\""
Apr 14 01:11:50.020249 sshd[5702]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:50.030870 systemd[1]: Started sshd@16-10.0.0.8:22-10.0.0.1:52460.service - OpenSSH per-connection server daemon (10.0.0.1:52460).
Apr 14 01:11:50.050970 systemd[1]: sshd@15-10.0.0.8:22-10.0.0.1:52444.service: Deactivated successfully.
Apr 14 01:11:50.058119 systemd[1]: session-16.scope: Deactivated successfully.
Apr 14 01:11:50.061798 systemd-logind[1568]: Session 16 logged out. Waiting for processes to exit.
Apr 14 01:11:50.066766 systemd-logind[1568]: Removed session 16.
Apr 14 01:11:50.112561 sshd[5748]: Accepted publickey for core from 10.0.0.1 port 52460 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:50.114433 sshd[5748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:50.121270 systemd-logind[1568]: New session 17 of user core.
Apr 14 01:11:50.127136 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.084 [WARNING][5736] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 10, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4", Pod:"coredns-674b8bbfcf-pdxgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41b67747784", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.086 [INFO][5736] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab"
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.086 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" iface="eth0" netns=""
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.086 [INFO][5736] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab"
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.086 [INFO][5736] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab"
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.122 [INFO][5760] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" HandleID="k8s-pod-network.806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.123 [INFO][5760] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.123 [INFO][5760] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.133 [WARNING][5760] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" HandleID="k8s-pod-network.806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.134 [INFO][5760] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" HandleID="k8s-pod-network.806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.136 [INFO][5760] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:50.141363 containerd[1584]: 2026-04-14 01:11:50.138 [INFO][5736] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab"
Apr 14 01:11:50.141363 containerd[1584]: time="2026-04-14T01:11:50.141179353Z" level=info msg="TearDown network for sandbox \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\" successfully"
Apr 14 01:11:50.141363 containerd[1584]: time="2026-04-14T01:11:50.141202906Z" level=info msg="StopPodSandbox for \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\" returns successfully"
Apr 14 01:11:50.194543 containerd[1584]: time="2026-04-14T01:11:50.194193980Z" level=info msg="RemovePodSandbox for \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\""
Apr 14 01:11:50.197436 containerd[1584]: time="2026-04-14T01:11:50.197374603Z" level=info msg="Forcibly stopping sandbox \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\""
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.266 [WARNING][5783] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5c3519a7-25ed-4cbf-8b3f-53ccd47f87a7", ResourceVersion:"1079", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 10, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6299496b8ff70c8b795347e7ab5512e7b80eff873fd8d1cdc9f207f9329c8cc4", Pod:"coredns-674b8bbfcf-pdxgp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41b67747784", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.267 [INFO][5783] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab"
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.267 [INFO][5783] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" iface="eth0" netns=""
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.267 [INFO][5783] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab"
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.267 [INFO][5783] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab"
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.302 [INFO][5791] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" HandleID="k8s-pod-network.806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.302 [INFO][5791] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.302 [INFO][5791] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.313 [WARNING][5791] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" HandleID="k8s-pod-network.806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.313 [INFO][5791] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" HandleID="k8s-pod-network.806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab" Workload="localhost-k8s-coredns--674b8bbfcf--pdxgp-eth0"
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.317 [INFO][5791] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:50.322688 containerd[1584]: 2026-04-14 01:11:50.319 [INFO][5783] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab"
Apr 14 01:11:50.323949 containerd[1584]: time="2026-04-14T01:11:50.322794448Z" level=info msg="TearDown network for sandbox \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\" successfully"
Apr 14 01:11:50.376462 containerd[1584]: time="2026-04-14T01:11:50.376249947Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 14 01:11:50.376702 containerd[1584]: time="2026-04-14T01:11:50.376667075Z" level=info msg="RemovePodSandbox \"806fe7fff77bae2fb85c56aee91b35417cdcfd3ed129b5727326583193d328ab\" returns successfully"
Apr 14 01:11:50.389204 containerd[1584]: time="2026-04-14T01:11:50.389104668Z" level=info msg="StopPodSandbox for \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\""
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.448 [WARNING][5810] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" WorkloadEndpoint="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.448 [INFO][5810] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.448 [INFO][5810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" iface="eth0" netns=""
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.448 [INFO][5810] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.448 [INFO][5810] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.480 [INFO][5818] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" HandleID="k8s-pod-network.6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" Workload="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.481 [INFO][5818] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.481 [INFO][5818] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.501 [WARNING][5818] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" HandleID="k8s-pod-network.6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" Workload="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.501 [INFO][5818] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" HandleID="k8s-pod-network.6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" Workload="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.507 [INFO][5818] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:50.515252 containerd[1584]: 2026-04-14 01:11:50.510 [INFO][5810] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:50.517691 containerd[1584]: time="2026-04-14T01:11:50.515905048Z" level=info msg="TearDown network for sandbox \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\" successfully"
Apr 14 01:11:50.517927 containerd[1584]: time="2026-04-14T01:11:50.517742827Z" level=info msg="StopPodSandbox for \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\" returns successfully"
Apr 14 01:11:50.519612 containerd[1584]: time="2026-04-14T01:11:50.519494787Z" level=info msg="RemovePodSandbox for \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\""
Apr 14 01:11:50.519612 containerd[1584]: time="2026-04-14T01:11:50.519664554Z" level=info msg="Forcibly stopping sandbox \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\""
Apr 14 01:11:50.704012 sshd[5748]: pam_unix(sshd:session): session closed for user core
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.601 [WARNING][5837] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" WorkloadEndpoint="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.603 [INFO][5837] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.603 [INFO][5837] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" iface="eth0" netns=""
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.603 [INFO][5837] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.603 [INFO][5837] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.650 [INFO][5852] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" HandleID="k8s-pod-network.6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" Workload="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.651 [INFO][5852] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.651 [INFO][5852] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.677 [WARNING][5852] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" HandleID="k8s-pod-network.6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" Workload="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.677 [INFO][5852] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" HandleID="k8s-pod-network.6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d" Workload="localhost-k8s-whisker--cf6f98489--m9nv8-eth0"
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.691 [INFO][5852] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:50.713717 containerd[1584]: 2026-04-14 01:11:50.709 [INFO][5837] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d"
Apr 14 01:11:50.713717 containerd[1584]: time="2026-04-14T01:11:50.713659879Z" level=info msg="TearDown network for sandbox \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\" successfully"
Apr 14 01:11:50.719069 systemd[1]: Started sshd@17-10.0.0.8:22-10.0.0.1:52466.service - OpenSSH per-connection server daemon (10.0.0.1:52466).
Apr 14 01:11:50.730212 containerd[1584]: time="2026-04-14T01:11:50.729469449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 14 01:11:50.730212 containerd[1584]: time="2026-04-14T01:11:50.729981295Z" level=info msg="RemovePodSandbox \"6b384e318b273d397bcef6f4bfe347e11ff91e579a1f74231b2151d0addbe80d\" returns successfully"
Apr 14 01:11:50.732904 containerd[1584]: time="2026-04-14T01:11:50.731157464Z" level=info msg="StopPodSandbox for \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\""
Apr 14 01:11:50.735930 systemd[1]: sshd@16-10.0.0.8:22-10.0.0.1:52460.service: Deactivated successfully.
Apr 14 01:11:50.743287 systemd[1]: session-17.scope: Deactivated successfully.
Apr 14 01:11:50.750729 systemd-logind[1568]: Session 17 logged out. Waiting for processes to exit.
Apr 14 01:11:50.753184 systemd-logind[1568]: Removed session 17.
Apr 14 01:11:50.791854 sshd[5860]: Accepted publickey for core from 10.0.0.1 port 52466 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU
Apr 14 01:11:50.790552 sshd[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 14 01:11:50.798141 systemd-logind[1568]: New session 18 of user core.
Apr 14 01:11:50.806983 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.802 [WARNING][5873] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0", GenerateName:"calico-apiserver-858fc86974-", Namespace:"calico-system", SelfLink:"", UID:"be5517b6-6d7c-4af7-8c09-ffaa013ba114", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858fc86974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9", Pod:"calico-apiserver-858fc86974-6fm4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia5e40ebc88d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.803 [INFO][5873] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630"
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.803 [INFO][5873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" iface="eth0" netns=""
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.803 [INFO][5873] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630"
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.803 [INFO][5873] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630"
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.837 [INFO][5882] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" HandleID="k8s-pod-network.b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0"
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.838 [INFO][5882] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.838 [INFO][5882] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.847 [WARNING][5882] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" HandleID="k8s-pod-network.b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0"
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.847 [INFO][5882] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" HandleID="k8s-pod-network.b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0"
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.850 [INFO][5882] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 14 01:11:50.854550 containerd[1584]: 2026-04-14 01:11:50.852 [INFO][5873] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630"
Apr 14 01:11:50.854990 containerd[1584]: time="2026-04-14T01:11:50.854659887Z" level=info msg="TearDown network for sandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\" successfully"
Apr 14 01:11:50.854990 containerd[1584]: time="2026-04-14T01:11:50.854684049Z" level=info msg="StopPodSandbox for \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\" returns successfully"
Apr 14 01:11:50.855218 containerd[1584]: time="2026-04-14T01:11:50.855184207Z" level=info msg="RemovePodSandbox for \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\""
Apr 14 01:11:50.855218 containerd[1584]: time="2026-04-14T01:11:50.855208154Z" level=info msg="Forcibly stopping sandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\""
Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.923 [WARNING][5911] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0", GenerateName:"calico-apiserver-858fc86974-", Namespace:"calico-system", SelfLink:"", UID:"be5517b6-6d7c-4af7-8c09-ffaa013ba114", ResourceVersion:"1178", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858fc86974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"83d52f99bc98a12bdbda3b400c9d506c7d7d312effcdbb0f86fa44667c7331f9", Pod:"calico-apiserver-858fc86974-6fm4q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia5e40ebc88d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.923 [INFO][5911] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630"
Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.923 [INFO][5911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring.
ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" iface="eth0" netns="" Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.923 [INFO][5911] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.923 [INFO][5911] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.952 [INFO][5927] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" HandleID="k8s-pod-network.b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.953 [INFO][5927] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.953 [INFO][5927] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.961 [WARNING][5927] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" HandleID="k8s-pod-network.b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.961 [INFO][5927] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" HandleID="k8s-pod-network.b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Workload="localhost-k8s-calico--apiserver--858fc86974--6fm4q-eth0" Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.967 [INFO][5927] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:50.978416 containerd[1584]: 2026-04-14 01:11:50.973 [INFO][5911] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630" Apr 14 01:11:50.978416 containerd[1584]: time="2026-04-14T01:11:50.977445790Z" level=info msg="TearDown network for sandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\" successfully" Apr 14 01:11:50.982547 sshd[5860]: pam_unix(sshd:session): session closed for user core Apr 14 01:11:50.984630 containerd[1584]: time="2026-04-14T01:11:50.984501834Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 01:11:50.984791 containerd[1584]: time="2026-04-14T01:11:50.984745324Z" level=info msg="RemovePodSandbox \"b953e61f9957cc0be5671aba5855b3a385c3022488a815a48076294919565630\" returns successfully" Apr 14 01:11:50.985757 containerd[1584]: time="2026-04-14T01:11:50.985708538Z" level=info msg="StopPodSandbox for \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\"" Apr 14 01:11:50.987172 systemd-logind[1568]: Session 18 logged out. Waiting for processes to exit. Apr 14 01:11:50.987308 systemd[1]: sshd@17-10.0.0.8:22-10.0.0.1:52466.service: Deactivated successfully. Apr 14 01:11:50.991299 systemd[1]: session-18.scope: Deactivated successfully. Apr 14 01:11:50.994631 systemd-logind[1568]: Removed session 18. Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.086 [WARNING][5947] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mzw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"241647e8-70b4-4fa4-aa60-9aba8555b739", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042", Pod:"csi-node-driver-5mzw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid42d66c7e60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.086 [INFO][5947] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.086 [INFO][5947] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" iface="eth0" netns="" Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.086 [INFO][5947] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.086 [INFO][5947] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.138 [INFO][5957] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" HandleID="k8s-pod-network.13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.142 [INFO][5957] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.142 [INFO][5957] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.155 [WARNING][5957] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" HandleID="k8s-pod-network.13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.155 [INFO][5957] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" HandleID="k8s-pod-network.13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.158 [INFO][5957] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:51.166125 containerd[1584]: 2026-04-14 01:11:51.163 [INFO][5947] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:51.169852 containerd[1584]: time="2026-04-14T01:11:51.166203273Z" level=info msg="TearDown network for sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\" successfully" Apr 14 01:11:51.169852 containerd[1584]: time="2026-04-14T01:11:51.166247581Z" level=info msg="StopPodSandbox for \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\" returns successfully" Apr 14 01:11:51.169852 containerd[1584]: time="2026-04-14T01:11:51.169678362Z" level=info msg="RemovePodSandbox for \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\"" Apr 14 01:11:51.170079 containerd[1584]: time="2026-04-14T01:11:51.170057452Z" level=info msg="Forcibly stopping sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\"" Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.300 [WARNING][5974] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5mzw4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"241647e8-70b4-4fa4-aa60-9aba8555b739", ResourceVersion:"1266", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e9a9705fa089312dc1035b793310f7a3c08ab92d8b062b201189a813b91d6042", Pod:"csi-node-driver-5mzw4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid42d66c7e60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.301 [INFO][5974] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.301 [INFO][5974] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" iface="eth0" netns="" Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.301 [INFO][5974] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.301 [INFO][5974] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.325 [INFO][5983] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" HandleID="k8s-pod-network.13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.326 [INFO][5983] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.326 [INFO][5983] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.339 [WARNING][5983] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" HandleID="k8s-pod-network.13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.339 [INFO][5983] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" HandleID="k8s-pod-network.13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Workload="localhost-k8s-csi--node--driver--5mzw4-eth0" Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.344 [INFO][5983] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:51.350427 containerd[1584]: 2026-04-14 01:11:51.347 [INFO][5974] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb" Apr 14 01:11:51.351110 containerd[1584]: time="2026-04-14T01:11:51.350598541Z" level=info msg="TearDown network for sandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\" successfully" Apr 14 01:11:51.363455 containerd[1584]: time="2026-04-14T01:11:51.363121007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 01:11:51.363838 containerd[1584]: time="2026-04-14T01:11:51.363759702Z" level=info msg="RemovePodSandbox \"13cd8e9e7bbaf26bd72bf963713a603a2aeee798a384aa13349aa8cc711b4bfb\" returns successfully" Apr 14 01:11:51.364704 containerd[1584]: time="2026-04-14T01:11:51.364632051Z" level=info msg="StopPodSandbox for \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\"" Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.430 [WARNING][6001] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0", GenerateName:"calico-apiserver-858fc86974-", Namespace:"calico-system", SelfLink:"", UID:"8cfb8838-233c-4923-a3d1-211c57385c00", ResourceVersion:"1280", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858fc86974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41", Pod:"calico-apiserver-858fc86974-lgjnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9eeeaded672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.430 [INFO][6001] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.430 [INFO][6001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" iface="eth0" netns="" Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.430 [INFO][6001] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.430 [INFO][6001] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.465 [INFO][6009] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" HandleID="k8s-pod-network.4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.466 [INFO][6009] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.466 [INFO][6009] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.479 [WARNING][6009] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" HandleID="k8s-pod-network.4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.479 [INFO][6009] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" HandleID="k8s-pod-network.4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.484 [INFO][6009] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:51.488405 containerd[1584]: 2026-04-14 01:11:51.486 [INFO][6001] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Apr 14 01:11:51.488810 containerd[1584]: time="2026-04-14T01:11:51.488523242Z" level=info msg="TearDown network for sandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\" successfully" Apr 14 01:11:51.488810 containerd[1584]: time="2026-04-14T01:11:51.488552147Z" level=info msg="StopPodSandbox for \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\" returns successfully" Apr 14 01:11:51.489175 containerd[1584]: time="2026-04-14T01:11:51.489109995Z" level=info msg="RemovePodSandbox for \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\"" Apr 14 01:11:51.489175 containerd[1584]: time="2026-04-14T01:11:51.489132600Z" level=info msg="Forcibly stopping sandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\"" Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.541 [WARNING][6027] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0", GenerateName:"calico-apiserver-858fc86974-", Namespace:"calico-system", SelfLink:"", UID:"8cfb8838-233c-4923-a3d1-211c57385c00", ResourceVersion:"1280", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"858fc86974", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c945e4065b2cf02b9fc41f94a31b3d2add7add845ceba439e28bceb9c106be41", Pod:"calico-apiserver-858fc86974-lgjnc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9eeeaded672", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.542 [INFO][6027] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.542 [INFO][6027] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" iface="eth0" netns="" Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.542 [INFO][6027] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.542 [INFO][6027] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.579 [INFO][6035] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" HandleID="k8s-pod-network.4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.580 [INFO][6035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.580 [INFO][6035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.589 [WARNING][6035] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" HandleID="k8s-pod-network.4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.589 [INFO][6035] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" HandleID="k8s-pod-network.4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Workload="localhost-k8s-calico--apiserver--858fc86974--lgjnc-eth0" Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.592 [INFO][6035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:51.596214 containerd[1584]: 2026-04-14 01:11:51.594 [INFO][6027] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854" Apr 14 01:11:51.596841 containerd[1584]: time="2026-04-14T01:11:51.596410111Z" level=info msg="TearDown network for sandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\" successfully" Apr 14 01:11:51.600167 containerd[1584]: time="2026-04-14T01:11:51.600133570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 01:11:51.602399 containerd[1584]: time="2026-04-14T01:11:51.600700202Z" level=info msg="RemovePodSandbox \"4468cb443ac4aa10c7c0c42e9d2d083f5f1ed28d0898b9b0a9ec49ac99d53854\" returns successfully" Apr 14 01:11:51.604391 containerd[1584]: time="2026-04-14T01:11:51.604248943Z" level=info msg="StopPodSandbox for \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\"" Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.659 [WARNING][6053] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--gpjhg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"c6df9bea-e3c6-4c2c-952c-aaf1341b5033", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891", Pod:"goldmane-5b85766d88-gpjhg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fbea8dda84", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.660 [INFO][6053] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.660 [INFO][6053] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" iface="eth0" netns="" Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.660 [INFO][6053] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.660 [INFO][6053] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.686 [INFO][6062] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" HandleID="k8s-pod-network.c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0" Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.687 [INFO][6062] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.687 [INFO][6062] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.697 [WARNING][6062] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" HandleID="k8s-pod-network.c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0" Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.698 [INFO][6062] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" HandleID="k8s-pod-network.c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0" Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.703 [INFO][6062] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:51.706621 containerd[1584]: 2026-04-14 01:11:51.704 [INFO][6053] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Apr 14 01:11:51.707076 containerd[1584]: time="2026-04-14T01:11:51.706808912Z" level=info msg="TearDown network for sandbox \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\" successfully" Apr 14 01:11:51.707076 containerd[1584]: time="2026-04-14T01:11:51.706836178Z" level=info msg="StopPodSandbox for \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\" returns successfully" Apr 14 01:11:51.707548 containerd[1584]: time="2026-04-14T01:11:51.707484146Z" level=info msg="RemovePodSandbox for \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\"" Apr 14 01:11:51.707548 containerd[1584]: time="2026-04-14T01:11:51.707537277Z" level=info msg="Forcibly stopping sandbox \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\"" Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.754 [WARNING][6080] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--5b85766d88--gpjhg-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"c6df9bea-e3c6-4c2c-952c-aaf1341b5033", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d27d0b71a85751a555bd600ce1faa67ff923efeaace182cc11e758f0eb1fa891", Pod:"goldmane-5b85766d88-gpjhg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fbea8dda84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.755 [INFO][6080] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.755 [INFO][6080] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" iface="eth0" netns="" Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.755 [INFO][6080] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.755 [INFO][6080] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.799 [INFO][6089] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" HandleID="k8s-pod-network.c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0" Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.800 [INFO][6089] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.800 [INFO][6089] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.808 [WARNING][6089] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" HandleID="k8s-pod-network.c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0" Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.808 [INFO][6089] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" HandleID="k8s-pod-network.c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Workload="localhost-k8s-goldmane--5b85766d88--gpjhg-eth0" Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.812 [INFO][6089] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:51.817434 containerd[1584]: 2026-04-14 01:11:51.815 [INFO][6080] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18" Apr 14 01:11:51.817928 containerd[1584]: time="2026-04-14T01:11:51.817886528Z" level=info msg="TearDown network for sandbox \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\" successfully" Apr 14 01:11:51.821692 containerd[1584]: time="2026-04-14T01:11:51.821640013Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 01:11:51.821875 containerd[1584]: time="2026-04-14T01:11:51.821764492Z" level=info msg="RemovePodSandbox \"c45d31724e76f34921db0729c6cd8c20d7ee3e0b8552fcc5ea53946a7948fd18\" returns successfully" Apr 14 01:11:51.822484 containerd[1584]: time="2026-04-14T01:11:51.822455845Z" level=info msg="StopPodSandbox for \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\"" Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.877 [WARNING][6106] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d332dd42-09a8-4567-aca9-70ecff2b60fc", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518", Pod:"coredns-674b8bbfcf-t9cp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali492ccbaaeac", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.878 [INFO][6106] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.878 [INFO][6106] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" iface="eth0" netns="" Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.878 [INFO][6106] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.878 [INFO][6106] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.915 [INFO][6115] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" HandleID="k8s-pod-network.a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.918 [INFO][6115] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.919 [INFO][6115] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.933 [WARNING][6115] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" HandleID="k8s-pod-network.a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.933 [INFO][6115] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" HandleID="k8s-pod-network.a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.938 [INFO][6115] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:51.948253 containerd[1584]: 2026-04-14 01:11:51.943 [INFO][6106] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:51.948878 containerd[1584]: time="2026-04-14T01:11:51.948465463Z" level=info msg="TearDown network for sandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\" successfully" Apr 14 01:11:51.948878 containerd[1584]: time="2026-04-14T01:11:51.948494084Z" level=info msg="StopPodSandbox for \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\" returns successfully" Apr 14 01:11:51.949379 containerd[1584]: time="2026-04-14T01:11:51.949271395Z" level=info msg="RemovePodSandbox for \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\"" Apr 14 01:11:51.949501 containerd[1584]: time="2026-04-14T01:11:51.949465931Z" level=info msg="Forcibly stopping sandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\"" Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:51.992 [WARNING][6132] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d332dd42-09a8-4567-aca9-70ecff2b60fc", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 10, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"61bb69f5d2f6e54689667ff5f359dd04121cb9333ed15a54e08d0eae18c31518", Pod:"coredns-674b8bbfcf-t9cp8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali492ccbaaeac", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:51.993 [INFO][6132] 
cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:51.993 [INFO][6132] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" iface="eth0" netns="" Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:51.993 [INFO][6132] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:51.993 [INFO][6132] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:52.033 [INFO][6141] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" HandleID="k8s-pod-network.a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:52.033 [INFO][6141] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:52.033 [INFO][6141] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:52.042 [WARNING][6141] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" HandleID="k8s-pod-network.a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:52.042 [INFO][6141] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" HandleID="k8s-pod-network.a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Workload="localhost-k8s-coredns--674b8bbfcf--t9cp8-eth0" Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:52.045 [INFO][6141] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:52.050098 containerd[1584]: 2026-04-14 01:11:52.047 [INFO][6132] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7" Apr 14 01:11:52.050901 containerd[1584]: time="2026-04-14T01:11:52.050212112Z" level=info msg="TearDown network for sandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\" successfully" Apr 14 01:11:52.054724 containerd[1584]: time="2026-04-14T01:11:52.054571397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 14 01:11:52.054893 containerd[1584]: time="2026-04-14T01:11:52.054750522Z" level=info msg="RemovePodSandbox \"a8c9534adb8f54d13c06424e9397b54ff9151511766b8e5c3528b11c1e93c0b7\" returns successfully" Apr 14 01:11:52.055531 containerd[1584]: time="2026-04-14T01:11:52.055410434Z" level=info msg="StopPodSandbox for \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\"" Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.104 [WARNING][6158] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0", GenerateName:"calico-kube-controllers-69f7f46b8c-", Namespace:"calico-system", SelfLink:"", UID:"0e971146-3f82-4810-b7e0-9307354ac58e", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69f7f46b8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42", Pod:"calico-kube-controllers-69f7f46b8c-2nl2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3ad035e267", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.105 [INFO][6158] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.105 [INFO][6158] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" iface="eth0" netns="" Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.105 [INFO][6158] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.105 [INFO][6158] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.139 [INFO][6167] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" HandleID="k8s-pod-network.1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.139 [INFO][6167] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.139 [INFO][6167] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.150 [WARNING][6167] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" HandleID="k8s-pod-network.1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.150 [INFO][6167] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" HandleID="k8s-pod-network.1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.153 [INFO][6167] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:52.157401 containerd[1584]: 2026-04-14 01:11:52.155 [INFO][6158] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Apr 14 01:11:52.158221 containerd[1584]: time="2026-04-14T01:11:52.157461288Z" level=info msg="TearDown network for sandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\" successfully" Apr 14 01:11:52.158221 containerd[1584]: time="2026-04-14T01:11:52.157485739Z" level=info msg="StopPodSandbox for \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\" returns successfully" Apr 14 01:11:52.158221 containerd[1584]: time="2026-04-14T01:11:52.157956721Z" level=info msg="RemovePodSandbox for \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\"" Apr 14 01:11:52.158221 containerd[1584]: time="2026-04-14T01:11:52.157981252Z" level=info msg="Forcibly stopping sandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\"" Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.204 [WARNING][6183] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0", GenerateName:"calico-kube-controllers-69f7f46b8c-", Namespace:"calico-system", SelfLink:"", UID:"0e971146-3f82-4810-b7e0-9307354ac58e", ResourceVersion:"1125", Generation:0, CreationTimestamp:time.Date(2026, time.April, 14, 1, 11, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69f7f46b8c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"108921f23351bfd98aeda2153b89e9bccf0b73462a7e7b7655d8542575991c42", Pod:"calico-kube-controllers-69f7f46b8c-2nl2l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia3ad035e267", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.205 [INFO][6183] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.205 [INFO][6183] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" iface="eth0" netns="" Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.205 [INFO][6183] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.205 [INFO][6183] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.241 [INFO][6191] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" HandleID="k8s-pod-network.1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.242 [INFO][6191] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.242 [INFO][6191] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.252 [WARNING][6191] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" HandleID="k8s-pod-network.1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.252 [INFO][6191] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" HandleID="k8s-pod-network.1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Workload="localhost-k8s-calico--kube--controllers--69f7f46b8c--2nl2l-eth0" Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.256 [INFO][6191] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 14 01:11:52.260533 containerd[1584]: 2026-04-14 01:11:52.258 [INFO][6183] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3" Apr 14 01:11:52.261248 containerd[1584]: time="2026-04-14T01:11:52.260786741Z" level=info msg="TearDown network for sandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\" successfully" Apr 14 01:11:52.266615 containerd[1584]: time="2026-04-14T01:11:52.266530887Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 14 01:11:52.266789 containerd[1584]: time="2026-04-14T01:11:52.266675655Z" level=info msg="RemovePodSandbox \"1396ed3ffb22edda45fb84b609899a0aefbbe5308139d522c7483e42f587a2a3\" returns successfully" Apr 14 01:11:55.997857 systemd[1]: Started sshd@18-10.0.0.8:22-10.0.0.1:49472.service - OpenSSH per-connection server daemon (10.0.0.1:49472). 
Apr 14 01:11:56.035521 sshd[6200]: Accepted publickey for core from 10.0.0.1 port 49472 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU Apr 14 01:11:56.037312 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:11:56.048244 systemd-logind[1568]: New session 19 of user core. Apr 14 01:11:56.062584 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 14 01:11:56.267512 sshd[6200]: pam_unix(sshd:session): session closed for user core Apr 14 01:11:56.277407 systemd[1]: sshd@18-10.0.0.8:22-10.0.0.1:49472.service: Deactivated successfully. Apr 14 01:11:56.279863 systemd[1]: session-19.scope: Deactivated successfully. Apr 14 01:11:56.280829 systemd-logind[1568]: Session 19 logged out. Waiting for processes to exit. Apr 14 01:11:56.282764 systemd-logind[1568]: Removed session 19. Apr 14 01:12:01.312805 systemd[1]: Started sshd@19-10.0.0.8:22-10.0.0.1:49488.service - OpenSSH per-connection server daemon (10.0.0.1:49488). Apr 14 01:12:01.610033 sshd[6267]: Accepted publickey for core from 10.0.0.1 port 49488 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU Apr 14 01:12:01.609290 sshd[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:12:01.629525 systemd-logind[1568]: New session 20 of user core. Apr 14 01:12:01.636795 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 14 01:12:02.136140 sshd[6267]: pam_unix(sshd:session): session closed for user core Apr 14 01:12:02.146146 systemd[1]: sshd@19-10.0.0.8:22-10.0.0.1:49488.service: Deactivated successfully. Apr 14 01:12:02.158100 systemd-logind[1568]: Session 20 logged out. Waiting for processes to exit. Apr 14 01:12:02.162554 systemd[1]: session-20.scope: Deactivated successfully. Apr 14 01:12:02.168855 systemd-logind[1568]: Removed session 20. 
Apr 14 01:12:07.158430 systemd[1]: Started sshd@20-10.0.0.8:22-10.0.0.1:49380.service - OpenSSH per-connection server daemon (10.0.0.1:49380). Apr 14 01:12:07.316790 sshd[6308]: Accepted publickey for core from 10.0.0.1 port 49380 ssh2: RSA SHA256:zM2obmGL5oGyn8Z+rg+lBADX8Aw1wc1EM+eCvx6I1cU Apr 14 01:12:07.321410 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 14 01:12:07.341982 systemd-logind[1568]: New session 21 of user core. Apr 14 01:12:07.361727 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 14 01:12:07.946098 sshd[6308]: pam_unix(sshd:session): session closed for user core Apr 14 01:12:07.955868 systemd-logind[1568]: Session 21 logged out. Waiting for processes to exit. Apr 14 01:12:07.959376 systemd[1]: sshd@20-10.0.0.8:22-10.0.0.1:49380.service: Deactivated successfully. Apr 14 01:12:07.980060 systemd[1]: session-21.scope: Deactivated successfully. Apr 14 01:12:07.989065 systemd-logind[1568]: Removed session 21.