Apr 17 23:36:41.944287 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Apr 17 22:11:20 -00 2026
Apr 17 23:36:41.944325 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:36:41.944344 kernel: BIOS-provided physical RAM map:
Apr 17 23:36:41.944356 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 17 23:36:41.944367 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 17 23:36:41.944378 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 17 23:36:41.944392 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 17 23:36:41.944404 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 17 23:36:41.944416 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 17 23:36:41.944431 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 17 23:36:41.944443 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 17 23:36:41.944455 kernel: NX (Execute Disable) protection: active
Apr 17 23:36:41.944467 kernel: APIC: Static calls initialized
Apr 17 23:36:41.944479 kernel: efi: EFI v2.7 by EDK II
Apr 17 23:36:41.944495 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 17 23:36:41.944512 kernel: SMBIOS 2.7 present.
Apr 17 23:36:41.944525 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 17 23:36:41.944537 kernel: Hypervisor detected: KVM
Apr 17 23:36:41.944551 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 17 23:36:41.944563 kernel: kvm-clock: using sched offset of 4087015421 cycles
Apr 17 23:36:41.944575 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 17 23:36:41.944588 kernel: tsc: Detected 2499.996 MHz processor
Apr 17 23:36:41.944601 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 17 23:36:41.944615 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 17 23:36:41.944628 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 17 23:36:41.944646 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 17 23:36:41.944661 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 17 23:36:41.944675 kernel: Using GB pages for direct mapping
Apr 17 23:36:41.944690 kernel: Secure boot disabled
Apr 17 23:36:41.944705 kernel: ACPI: Early table checksum verification disabled
Apr 17 23:36:41.944719 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 17 23:36:41.944734 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 17 23:36:41.944750 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 17 23:36:41.944764 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 17 23:36:41.944781 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 17 23:36:41.944795 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 17 23:36:41.944809 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 17 23:36:41.944823 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 17 23:36:41.944837 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 17 23:36:41.944853 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 17 23:36:41.944873 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:36:41.944889 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 17 23:36:41.944904 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 17 23:36:41.944933 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 17 23:36:41.944947 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 17 23:36:41.944959 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 17 23:36:41.944972 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 17 23:36:41.944986 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 17 23:36:41.945004 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 17 23:36:41.945020 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 17 23:36:41.945034 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 17 23:36:41.945048 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 17 23:36:41.945062 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 17 23:36:41.945078 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 17 23:36:41.945092 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 17 23:36:41.945106 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 17 23:36:41.945121 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 17 23:36:41.945138 kernel: NUMA: Initialized distance table, cnt=1
Apr 17 23:36:41.945153 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 17 23:36:41.945169 kernel: Zone ranges:
Apr 17 23:36:41.945186 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 17 23:36:41.945202 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 17 23:36:41.945218 kernel: Normal empty
Apr 17 23:36:41.945234 kernel: Movable zone start for each node
Apr 17 23:36:41.945249 kernel: Early memory node ranges
Apr 17 23:36:41.945265 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 17 23:36:41.945284 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 17 23:36:41.945300 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 17 23:36:41.945315 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 17 23:36:41.945331 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 17 23:36:41.945347 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 17 23:36:41.945364 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 17 23:36:41.945380 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 17 23:36:41.945396 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 17 23:36:41.945412 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 17 23:36:41.945428 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 17 23:36:41.945448 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 17 23:36:41.945463 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 17 23:36:41.945478 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 17 23:36:41.945494 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 17 23:36:41.945509 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 17 23:36:41.945524 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 17 23:36:41.945539 kernel: TSC deadline timer available
Apr 17 23:36:41.945554 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 17 23:36:41.945569 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 17 23:36:41.945588 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 17 23:36:41.945602 kernel: Booting paravirtualized kernel on KVM
Apr 17 23:36:41.945616 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 17 23:36:41.945630 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 17 23:36:41.945644 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 17 23:36:41.945658 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 17 23:36:41.945672 kernel: pcpu-alloc: [0] 0 1
Apr 17 23:36:41.945686 kernel: kvm-guest: PV spinlocks enabled
Apr 17 23:36:41.945702 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 17 23:36:41.945725 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:36:41.945751 kernel: random: crng init done
Apr 17 23:36:41.945765 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 17 23:36:41.945781 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 17 23:36:41.945797 kernel: Fallback order for Node 0: 0
Apr 17 23:36:41.945813 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 17 23:36:41.945828 kernel: Policy zone: DMA32
Apr 17 23:36:41.945844 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 17 23:36:41.945863 kernel: Memory: 1874644K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162900K reserved, 0K cma-reserved)
Apr 17 23:36:41.945878 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 17 23:36:41.945892 kernel: Kernel/User page tables isolation: enabled
Apr 17 23:36:41.945905 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 17 23:36:41.947952 kernel: ftrace: allocated 149 pages with 4 groups
Apr 17 23:36:41.947975 kernel: Dynamic Preempt: voluntary
Apr 17 23:36:41.947988 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 17 23:36:41.948003 kernel: rcu: RCU event tracing is enabled.
Apr 17 23:36:41.948016 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 17 23:36:41.948035 kernel: Trampoline variant of Tasks RCU enabled.
Apr 17 23:36:41.948048 kernel: Rude variant of Tasks RCU enabled.
Apr 17 23:36:41.948062 kernel: Tracing variant of Tasks RCU enabled.
Apr 17 23:36:41.948075 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 17 23:36:41.948088 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 17 23:36:41.948101 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 17 23:36:41.948115 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 17 23:36:41.948143 kernel: Console: colour dummy device 80x25
Apr 17 23:36:41.948158 kernel: printk: console [tty0] enabled
Apr 17 23:36:41.948171 kernel: printk: console [ttyS0] enabled
Apr 17 23:36:41.948186 kernel: ACPI: Core revision 20230628
Apr 17 23:36:41.948200 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 17 23:36:41.948217 kernel: APIC: Switch to symmetric I/O mode setup
Apr 17 23:36:41.948231 kernel: x2apic enabled
Apr 17 23:36:41.948245 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 17 23:36:41.948260 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 17 23:36:41.948274 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Apr 17 23:36:41.948292 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 17 23:36:41.948306 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 17 23:36:41.948320 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 17 23:36:41.948333 kernel: Spectre V2 : Mitigation: Retpolines
Apr 17 23:36:41.948347 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 17 23:36:41.948360 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 17 23:36:41.948375 kernel: RETBleed: Vulnerable
Apr 17 23:36:41.948388 kernel: Speculative Store Bypass: Vulnerable
Apr 17 23:36:41.948403 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:36:41.948417 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 17 23:36:41.948434 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 17 23:36:41.948448 kernel: active return thunk: its_return_thunk
Apr 17 23:36:41.948462 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 17 23:36:41.948476 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 17 23:36:41.948490 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 17 23:36:41.948504 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 17 23:36:41.948518 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 17 23:36:41.948532 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 17 23:36:41.948546 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 17 23:36:41.948560 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 17 23:36:41.948573 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 17 23:36:41.948589 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 17 23:36:41.948604 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 17 23:36:41.948618 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 17 23:36:41.948632 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 17 23:36:41.948646 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 17 23:36:41.948659 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 17 23:36:41.948673 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 17 23:36:41.948687 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 17 23:36:41.948701 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 17 23:36:41.948715 kernel: Freeing SMP alternatives memory: 32K
Apr 17 23:36:41.948728 kernel: pid_max: default: 32768 minimum: 301
Apr 17 23:36:41.948745 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 17 23:36:41.948759 kernel: landlock: Up and running.
Apr 17 23:36:41.948773 kernel: SELinux: Initializing.
Apr 17 23:36:41.948787 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:36:41.948801 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 17 23:36:41.948814 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 17 23:36:41.948828 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:36:41.948843 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:36:41.948857 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 17 23:36:41.948871 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 17 23:36:41.948889 kernel: signal: max sigframe size: 3632
Apr 17 23:36:41.948903 kernel: rcu: Hierarchical SRCU implementation.
Apr 17 23:36:41.948928 kernel: rcu: Max phase no-delay instances is 400.
Apr 17 23:36:41.948942 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 17 23:36:41.948956 kernel: smp: Bringing up secondary CPUs ...
Apr 17 23:36:41.948970 kernel: smpboot: x86: Booting SMP configuration:
Apr 17 23:36:41.948984 kernel: .... node #0, CPUs: #1
Apr 17 23:36:41.948999 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 17 23:36:41.949014 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 17 23:36:41.949032 kernel: smp: Brought up 1 node, 2 CPUs
Apr 17 23:36:41.949046 kernel: smpboot: Max logical packages: 1
Apr 17 23:36:41.949061 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Apr 17 23:36:41.949075 kernel: devtmpfs: initialized
Apr 17 23:36:41.949090 kernel: x86/mm: Memory block size: 128MB
Apr 17 23:36:41.949104 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 17 23:36:41.949119 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 17 23:36:41.949134 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 17 23:36:41.949149 kernel: pinctrl core: initialized pinctrl subsystem
Apr 17 23:36:41.949167 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 17 23:36:41.949182 kernel: audit: initializing netlink subsys (disabled)
Apr 17 23:36:41.949198 kernel: audit: type=2000 audit(1776469001.645:1): state=initialized audit_enabled=0 res=1
Apr 17 23:36:41.949212 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 17 23:36:41.949228 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 17 23:36:41.949243 kernel: cpuidle: using governor menu
Apr 17 23:36:41.949258 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 17 23:36:41.949274 kernel: dca service started, version 1.12.1
Apr 17 23:36:41.949289 kernel: PCI: Using configuration type 1 for base access
Apr 17 23:36:41.949309 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 17 23:36:41.949325 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 17 23:36:41.949340 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 17 23:36:41.949356 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 17 23:36:41.949372 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 17 23:36:41.949387 kernel: ACPI: Added _OSI(Module Device)
Apr 17 23:36:41.949402 kernel: ACPI: Added _OSI(Processor Device)
Apr 17 23:36:41.949419 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 17 23:36:41.949435 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 17 23:36:41.949454 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 17 23:36:41.949470 kernel: ACPI: Interpreter enabled
Apr 17 23:36:41.949485 kernel: ACPI: PM: (supports S0 S5)
Apr 17 23:36:41.949501 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 17 23:36:41.949517 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 17 23:36:41.949533 kernel: PCI: Using E820 reservations for host bridge windows
Apr 17 23:36:41.949549 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 17 23:36:41.949565 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 17 23:36:41.949794 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 17 23:36:41.952035 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 17 23:36:41.952214 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 17 23:36:41.952237 kernel: acpiphp: Slot [3] registered
Apr 17 23:36:41.952256 kernel: acpiphp: Slot [4] registered
Apr 17 23:36:41.952273 kernel: acpiphp: Slot [5] registered
Apr 17 23:36:41.952289 kernel: acpiphp: Slot [6] registered
Apr 17 23:36:41.952307 kernel: acpiphp: Slot [7] registered
Apr 17 23:36:41.952331 kernel: acpiphp: Slot [8] registered
Apr 17 23:36:41.952348 kernel: acpiphp: Slot [9] registered
Apr 17 23:36:41.952365 kernel: acpiphp: Slot [10] registered
Apr 17 23:36:41.952382 kernel: acpiphp: Slot [11] registered
Apr 17 23:36:41.952400 kernel: acpiphp: Slot [12] registered
Apr 17 23:36:41.952418 kernel: acpiphp: Slot [13] registered
Apr 17 23:36:41.952436 kernel: acpiphp: Slot [14] registered
Apr 17 23:36:41.952453 kernel: acpiphp: Slot [15] registered
Apr 17 23:36:41.952471 kernel: acpiphp: Slot [16] registered
Apr 17 23:36:41.952489 kernel: acpiphp: Slot [17] registered
Apr 17 23:36:41.952509 kernel: acpiphp: Slot [18] registered
Apr 17 23:36:41.952526 kernel: acpiphp: Slot [19] registered
Apr 17 23:36:41.952543 kernel: acpiphp: Slot [20] registered
Apr 17 23:36:41.952559 kernel: acpiphp: Slot [21] registered
Apr 17 23:36:41.952576 kernel: acpiphp: Slot [22] registered
Apr 17 23:36:41.952593 kernel: acpiphp: Slot [23] registered
Apr 17 23:36:41.952609 kernel: acpiphp: Slot [24] registered
Apr 17 23:36:41.952625 kernel: acpiphp: Slot [25] registered
Apr 17 23:36:41.952642 kernel: acpiphp: Slot [26] registered
Apr 17 23:36:41.952662 kernel: acpiphp: Slot [27] registered
Apr 17 23:36:41.952678 kernel: acpiphp: Slot [28] registered
Apr 17 23:36:41.952695 kernel: acpiphp: Slot [29] registered
Apr 17 23:36:41.952712 kernel: acpiphp: Slot [30] registered
Apr 17 23:36:41.952728 kernel: acpiphp: Slot [31] registered
Apr 17 23:36:41.952746 kernel: PCI host bridge to bus 0000:00
Apr 17 23:36:41.952896 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 17 23:36:41.953043 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 17 23:36:41.953173 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 17 23:36:41.953296 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 17 23:36:41.953418 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:36:41.953540 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 17 23:36:41.953708 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 17 23:36:41.953869 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 17 23:36:41.956090 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 17 23:36:41.956258 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 17 23:36:41.956403 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 17 23:36:41.956542 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 17 23:36:41.956678 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 17 23:36:41.956808 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 17 23:36:41.956957 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 17 23:36:41.957090 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 17 23:36:41.957235 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 17 23:36:41.957364 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 17 23:36:41.957496 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 17 23:36:41.957629 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 17 23:36:41.957764 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 17 23:36:41.957901 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 17 23:36:41.960146 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 17 23:36:41.960335 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 17 23:36:41.960508 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 17 23:36:41.960534 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 17 23:36:41.960553 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 17 23:36:41.960572 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 17 23:36:41.960589 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 17 23:36:41.960605 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 17 23:36:41.960627 kernel: iommu: Default domain type: Translated
Apr 17 23:36:41.960644 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 17 23:36:41.960660 kernel: efivars: Registered efivars operations
Apr 17 23:36:41.960675 kernel: PCI: Using ACPI for IRQ routing
Apr 17 23:36:41.960691 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 17 23:36:41.960705 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 17 23:36:41.960718 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 17 23:36:41.966208 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 17 23:36:41.966409 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 17 23:36:41.966567 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 17 23:36:41.966588 kernel: vgaarb: loaded
Apr 17 23:36:41.966605 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 17 23:36:41.966621 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 17 23:36:41.966637 kernel: clocksource: Switched to clocksource kvm-clock
Apr 17 23:36:41.966653 kernel: VFS: Disk quotas dquot_6.6.0
Apr 17 23:36:41.966670 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 17 23:36:41.966686 kernel: pnp: PnP ACPI init
Apr 17 23:36:41.966707 kernel: pnp: PnP ACPI: found 5 devices
Apr 17 23:36:41.966724 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 17 23:36:41.966739 kernel: NET: Registered PF_INET protocol family
Apr 17 23:36:41.966752 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 17 23:36:41.966766 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 17 23:36:41.966781 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 17 23:36:41.966795 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 17 23:36:41.966810 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 17 23:36:41.966825 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 17 23:36:41.966843 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:36:41.966857 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 17 23:36:41.966872 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 17 23:36:41.966886 kernel: NET: Registered PF_XDP protocol family
Apr 17 23:36:41.967058 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 17 23:36:41.967181 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 17 23:36:41.967297 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 17 23:36:41.967416 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 17 23:36:41.967533 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 17 23:36:41.967675 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 17 23:36:41.967693 kernel: PCI: CLS 0 bytes, default 64
Apr 17 23:36:41.967709 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 17 23:36:41.967724 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Apr 17 23:36:41.967738 kernel: clocksource: Switched to clocksource tsc
Apr 17 23:36:41.967754 kernel: Initialise system trusted keyrings
Apr 17 23:36:41.967768 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 17 23:36:41.967782 kernel: Key type asymmetric registered
Apr 17 23:36:41.967799 kernel: Asymmetric key parser 'x509' registered
Apr 17 23:36:41.967813 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 17 23:36:41.967828 kernel: io scheduler mq-deadline registered
Apr 17 23:36:41.967842 kernel: io scheduler kyber registered
Apr 17 23:36:41.967857 kernel: io scheduler bfq registered
Apr 17 23:36:41.967871 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 17 23:36:41.967885 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 17 23:36:41.967901 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 17 23:36:41.967927 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 17 23:36:41.967944 kernel: i8042: Warning: Keylock active
Apr 17 23:36:41.967958 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 17 23:36:41.967972 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 17 23:36:41.968118 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 17 23:36:41.968241 kernel: rtc_cmos 00:00: registered as rtc0
Apr 17 23:36:41.968365 kernel: rtc_cmos 00:00: setting system clock to 2026-04-17T23:36:41 UTC (1776469001)
Apr 17 23:36:41.968485 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 17 23:36:41.968502 kernel: intel_pstate: CPU model not supported
Apr 17 23:36:41.968520 kernel: efifb: probing for efifb
Apr 17 23:36:41.968534 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 17 23:36:41.968548 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 17 23:36:41.968562 kernel: efifb: scrolling: redraw
Apr 17 23:36:41.968583 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 17 23:36:41.968598 kernel: Console: switching to colour frame buffer device 100x37
Apr 17 23:36:41.968612 kernel: fb0: EFI VGA frame buffer device
Apr 17 23:36:41.968625 kernel: pstore: Using crash dump compression: deflate
Apr 17 23:36:41.968638 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 17 23:36:41.968656 kernel: NET: Registered PF_INET6 protocol family
Apr 17 23:36:41.968671 kernel: Segment Routing with IPv6
Apr 17 23:36:41.968686 kernel: In-situ OAM (IOAM) with IPv6
Apr 17 23:36:41.968701 kernel: NET: Registered PF_PACKET protocol family
Apr 17 23:36:41.968718 kernel: Key type dns_resolver registered
Apr 17 23:36:41.968734 kernel: IPI shorthand broadcast: enabled
Apr 17 23:36:41.968776 kernel: sched_clock: Marking stable (487002014, 152419363)->(729433674, -90012297)
Apr 17 23:36:41.968796 kernel: registered taskstats version 1
Apr 17 23:36:41.968813 kernel: Loading compiled-in X.509 certificates
Apr 17 23:36:41.968833 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 39e9969c7f49062f0fc1d1fb72e8f874436eb94f'
Apr 17 23:36:41.968850 kernel: Key type .fscrypt registered
Apr 17 23:36:41.968866 kernel: Key type fscrypt-provisioning registered
Apr 17 23:36:41.968883 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 17 23:36:41.968899 kernel: ima: Allocated hash algorithm: sha1
Apr 17 23:36:41.968930 kernel: ima: No architecture policies found
Apr 17 23:36:41.968947 kernel: clk: Disabling unused clocks
Apr 17 23:36:41.968964 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 17 23:36:41.968980 kernel: Write protecting the kernel read-only data: 36864k
Apr 17 23:36:41.969001 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 17 23:36:41.969018 kernel: Run /init as init process
Apr 17 23:36:41.969035 kernel: with arguments:
Apr 17 23:36:41.969052 kernel: /init
Apr 17 23:36:41.969068 kernel: with environment:
Apr 17 23:36:41.969084 kernel: HOME=/
Apr 17 23:36:41.969101 kernel: TERM=linux
Apr 17 23:36:41.969122 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:36:41.969145 systemd[1]: Detected virtualization amazon.
Apr 17 23:36:41.969165 systemd[1]: Detected architecture x86-64.
Apr 17 23:36:41.969182 systemd[1]: Running in initrd.
Apr 17 23:36:41.969199 systemd[1]: No hostname configured, using default hostname.
Apr 17 23:36:41.969216 systemd[1]: Hostname set to .
Apr 17 23:36:41.969232 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:36:41.969247 systemd[1]: Queued start job for default target initrd.target.
Apr 17 23:36:41.969274 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:36:41.969293 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:36:41.969310 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 17 23:36:41.969326 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:36:41.969341 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 17 23:36:41.969360 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 17 23:36:41.969382 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 17 23:36:41.969401 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 17 23:36:41.969418 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:36:41.969435 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:36:41.969450 systemd[1]: Reached target paths.target - Path Units.
Apr 17 23:36:41.969465 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:36:41.969482 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:36:41.969503 systemd[1]: Reached target timers.target - Timer Units.
Apr 17 23:36:41.969518 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:36:41.969533 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:36:41.969550 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 17 23:36:41.969565 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 17 23:36:41.969582 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:36:41.969599 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:36:41.969617 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:36:41.969634 systemd[1]: Reached target sockets.target - Socket Units.
Apr 17 23:36:41.969655 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 17 23:36:41.969672 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:36:41.969689 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 17 23:36:41.969706 systemd[1]: Starting systemd-fsck-usr.service...
Apr 17 23:36:41.969724 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:36:41.969752 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:36:41.969799 systemd-journald[179]: Collecting audit messages is disabled.
Apr 17 23:36:41.969840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:36:41.969858 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 17 23:36:41.969875 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:36:41.969892 systemd[1]: Finished systemd-fsck-usr.service.
Apr 17 23:36:41.969931 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:36:41.969952 systemd-journald[179]: Journal started
Apr 17 23:36:41.969988 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2b00eff06bae1316cab9153b415ee4) is 4.7M, max 38.2M, 33.4M free.
Apr 17 23:36:41.977946 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:36:41.977376 systemd-modules-load[180]: Inserted module 'overlay'
Apr 17 23:36:41.992188 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:36:42.002003 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:36:42.003614 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:36:42.013943 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 17 23:36:42.015759 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:36:42.022123 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:36:42.032849 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:36:42.036834 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 17 23:36:42.035368 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:36:42.042146 kernel: Bridge firewalling registered
Apr 17 23:36:42.041068 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 17 23:36:42.045230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:36:42.054153 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 17 23:36:42.057715 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:36:42.061395 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:36:42.075732 dracut-cmdline[208]: dracut-dracut-053
Apr 17 23:36:42.079482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:36:42.081049 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=e69cfa144bf8cf6f0b7e7881c91c17228ba9dbcb6c99d9692bced9ddba34ee3a
Apr 17 23:36:42.090140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 17 23:36:42.135871 systemd-resolved[229]: Positive Trust Anchors:
Apr 17 23:36:42.136840 systemd-resolved[229]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 17 23:36:42.136906 systemd-resolved[229]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 17 23:36:42.144252 systemd-resolved[229]: Defaulting to hostname 'linux'.
Apr 17 23:36:42.148271 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 17 23:36:42.148962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:36:42.174948 kernel: SCSI subsystem initialized
Apr 17 23:36:42.183943 kernel: Loading iSCSI transport class v2.0-870.
Apr 17 23:36:42.195952 kernel: iscsi: registered transport (tcp)
Apr 17 23:36:42.217390 kernel: iscsi: registered transport (qla4xxx)
Apr 17 23:36:42.217475 kernel: QLogic iSCSI HBA Driver
Apr 17 23:36:42.255774 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:36:42.261110 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 17 23:36:42.287058 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 17 23:36:42.287136 kernel: device-mapper: uevent: version 1.0.3
Apr 17 23:36:42.289945 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 17 23:36:42.330962 kernel: raid6: avx512x4 gen() 18049 MB/s
Apr 17 23:36:42.348952 kernel: raid6: avx512x2 gen() 17920 MB/s
Apr 17 23:36:42.366948 kernel: raid6: avx512x1 gen() 17504 MB/s
Apr 17 23:36:42.384944 kernel: raid6: avx2x4 gen() 17790 MB/s
Apr 17 23:36:42.402945 kernel: raid6: avx2x2 gen() 17765 MB/s
Apr 17 23:36:42.421325 kernel: raid6: avx2x1 gen() 13762 MB/s
Apr 17 23:36:42.421364 kernel: raid6: using algorithm avx512x4 gen() 18049 MB/s
Apr 17 23:36:42.440263 kernel: raid6: .... xor() 7431 MB/s, rmw enabled
Apr 17 23:36:42.440317 kernel: raid6: using avx512x2 recovery algorithm
Apr 17 23:36:42.461949 kernel: xor: automatically using best checksumming function   avx
Apr 17 23:36:42.620948 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 17 23:36:42.631190 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:36:42.637154 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:36:42.652260 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Apr 17 23:36:42.657351 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:36:42.665194 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 17 23:36:42.684544 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Apr 17 23:36:42.715665 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:36:42.720159 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:36:42.773821 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:36:42.783780 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 17 23:36:42.808478 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:36:42.812643 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:36:42.814755 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:36:42.815339 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:36:42.822127 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 17 23:36:42.847230 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:36:42.879944 kernel: cryptd: max_cpu_qlen set to 1000
Apr 17 23:36:42.888759 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:36:42.888843 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:36:42.892809 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:36:42.893390 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:36:42.893475 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:36:42.894060 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:36:42.911804 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 17 23:36:42.912086 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 17 23:36:42.914786 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:36:42.921940 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 17 23:36:42.922000 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 17 23:36:42.925008 kernel: AES CTR mode by8 optimization enabled
Apr 17 23:36:42.929589 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:29:88:8a:86:df
Apr 17 23:36:42.927997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:36:42.928152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:36:42.930852 (udev-worker)[450]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:36:42.942714 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 17 23:36:42.958947 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 17 23:36:42.962072 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 17 23:36:42.971906 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:36:42.974327 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 17 23:36:42.984378 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 17 23:36:42.984452 kernel: GPT:9289727 != 33554431
Apr 17 23:36:42.984474 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 17 23:36:42.984496 kernel: GPT:9289727 != 33554431
Apr 17 23:36:42.984525 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 17 23:36:42.984545 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:36:42.984009 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 17 23:36:43.018181 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:36:43.076952 kernel: BTRFS: device fsid 81b0bf8a-1550-4880-b72f-76fa51dbb6c0 devid 1 transid 32 /dev/nvme0n1p3 scanned by (udev-worker) (443)
Apr 17 23:36:43.084973 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (448)
Apr 17 23:36:43.120680 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 17 23:36:43.160623 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 17 23:36:43.175661 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 17 23:36:43.176242 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 17 23:36:43.183531 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 17 23:36:43.189106 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 17 23:36:43.197666 disk-uuid[633]: Primary Header is updated.
Apr 17 23:36:43.197666 disk-uuid[633]: Secondary Entries is updated.
Apr 17 23:36:43.197666 disk-uuid[633]: Secondary Header is updated.
Apr 17 23:36:43.205942 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:36:43.212334 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:36:44.222974 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 17 23:36:44.223670 disk-uuid[634]: The operation has completed successfully.
Apr 17 23:36:44.361859 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 17 23:36:44.362529 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 17 23:36:44.385142 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 17 23:36:44.389843 sh[977]: Success
Apr 17 23:36:44.410940 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 17 23:36:44.520552 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 17 23:36:44.530065 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 17 23:36:44.532203 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 17 23:36:44.566080 kernel: BTRFS info (device dm-0): first mount of filesystem 81b0bf8a-1550-4880-b72f-76fa51dbb6c0
Apr 17 23:36:44.566171 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:36:44.566194 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 17 23:36:44.570060 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 17 23:36:44.570143 kernel: BTRFS info (device dm-0): using free space tree
Apr 17 23:36:44.661946 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 17 23:36:44.709876 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 17 23:36:44.711167 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 17 23:36:44.716134 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 17 23:36:44.719110 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 17 23:36:44.749640 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:36:44.749734 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:36:44.749757 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:36:44.766947 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:36:44.779942 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 17 23:36:44.783933 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:36:44.791213 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 17 23:36:44.799212 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 17 23:36:44.824353 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:36:44.830166 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 17 23:36:44.858344 systemd-networkd[1169]: lo: Link UP
Apr 17 23:36:44.858356 systemd-networkd[1169]: lo: Gained carrier
Apr 17 23:36:44.860206 systemd-networkd[1169]: Enumeration completed
Apr 17 23:36:44.860667 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:36:44.860672 systemd-networkd[1169]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 17 23:36:44.861847 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 17 23:36:44.863369 systemd[1]: Reached target network.target - Network.
Apr 17 23:36:44.865000 systemd-networkd[1169]: eth0: Link UP
Apr 17 23:36:44.865006 systemd-networkd[1169]: eth0: Gained carrier
Apr 17 23:36:44.865020 systemd-networkd[1169]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 17 23:36:44.885024 systemd-networkd[1169]: eth0: DHCPv4 address 172.31.16.109/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 17 23:36:45.159582 ignition[1133]: Ignition 2.19.0
Apr 17 23:36:45.159596 ignition[1133]: Stage: fetch-offline
Apr 17 23:36:45.159877 ignition[1133]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:36:45.159890 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:36:45.161799 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:36:45.160405 ignition[1133]: Ignition finished successfully
Apr 17 23:36:45.178226 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 17 23:36:45.193033 ignition[1177]: Ignition 2.19.0
Apr 17 23:36:45.193047 ignition[1177]: Stage: fetch
Apr 17 23:36:45.193498 ignition[1177]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:36:45.193513 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:36:45.193643 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:36:45.222149 ignition[1177]: PUT result: OK
Apr 17 23:36:45.231750 ignition[1177]: parsed url from cmdline: ""
Apr 17 23:36:45.231773 ignition[1177]: no config URL provided
Apr 17 23:36:45.231787 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign"
Apr 17 23:36:45.231805 ignition[1177]: no config at "/usr/lib/ignition/user.ign"
Apr 17 23:36:45.231830 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:36:45.233096 ignition[1177]: PUT result: OK
Apr 17 23:36:45.233163 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 17 23:36:45.235622 ignition[1177]: GET result: OK
Apr 17 23:36:45.235760 ignition[1177]: parsing config with SHA512: 69840b519ee128cd5d694d9201ee0da6fcf83012b9f9da2a3e54e02d6a6642d3427742fcf8b8b87524e7fbc79582126a32e3208475ef84b09f816ddc7dacff67
Apr 17 23:36:45.240376 unknown[1177]: fetched base config from "system"
Apr 17 23:36:45.240559 unknown[1177]: fetched base config from "system"
Apr 17 23:36:45.242125 ignition[1177]: fetch: fetch complete
Apr 17 23:36:45.241279 unknown[1177]: fetched user config from "aws"
Apr 17 23:36:45.242133 ignition[1177]: fetch: fetch passed
Apr 17 23:36:45.242199 ignition[1177]: Ignition finished successfully
Apr 17 23:36:45.245443 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 17 23:36:45.251179 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 17 23:36:45.266620 ignition[1183]: Ignition 2.19.0
Apr 17 23:36:45.266640 ignition[1183]: Stage: kargs
Apr 17 23:36:45.267121 ignition[1183]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:36:45.267136 ignition[1183]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:36:45.267260 ignition[1183]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:36:45.269278 ignition[1183]: PUT result: OK
Apr 17 23:36:45.278518 ignition[1183]: kargs: kargs passed
Apr 17 23:36:45.278597 ignition[1183]: Ignition finished successfully
Apr 17 23:36:45.280581 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 17 23:36:45.285163 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 17 23:36:45.300904 ignition[1189]: Ignition 2.19.0
Apr 17 23:36:45.300928 ignition[1189]: Stage: disks
Apr 17 23:36:45.301387 ignition[1189]: no configs at "/usr/lib/ignition/base.d"
Apr 17 23:36:45.301400 ignition[1189]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:36:45.301526 ignition[1189]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:36:45.303091 ignition[1189]: PUT result: OK
Apr 17 23:36:45.305952 ignition[1189]: disks: disks passed
Apr 17 23:36:45.306022 ignition[1189]: Ignition finished successfully
Apr 17 23:36:45.307757 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 17 23:36:45.308372 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 17 23:36:45.308710 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 17 23:36:45.309255 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:36:45.309845 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 17 23:36:45.310392 systemd[1]: Reached target basic.target - Basic System.
Apr 17 23:36:45.315092 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 17 23:36:45.348189 systemd-fsck[1197]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 17 23:36:45.351975 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 17 23:36:45.358051 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 17 23:36:45.461947 kernel: EXT4-fs (nvme0n1p9): mounted filesystem d3c199f8-8065-4f33-a75b-da2f09d4fc39 r/w with ordered data mode. Quota mode: none.
Apr 17 23:36:45.462572 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 17 23:36:45.463641 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:36:45.476059 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:36:45.479038 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 17 23:36:45.480880 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 17 23:36:45.482032 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 17 23:36:45.482066 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:36:45.495353 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 17 23:36:45.497756 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1216)
Apr 17 23:36:45.498953 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:36:45.498997 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:36:45.499026 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:36:45.509090 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 17 23:36:45.510779 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:36:45.513611 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:36:45.890279 initrd-setup-root[1240]: cut: /sysroot/etc/passwd: No such file or directory
Apr 17 23:36:45.907187 initrd-setup-root[1247]: cut: /sysroot/etc/group: No such file or directory
Apr 17 23:36:45.912656 initrd-setup-root[1254]: cut: /sysroot/etc/shadow: No such file or directory
Apr 17 23:36:45.917241 initrd-setup-root[1261]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 17 23:36:46.156059 systemd-networkd[1169]: eth0: Gained IPv6LL
Apr 17 23:36:46.220488 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 17 23:36:46.229069 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 17 23:36:46.234186 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 17 23:36:46.241616 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 17 23:36:46.245249 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:36:46.273570 ignition[1328]: INFO : Ignition 2.19.0
Apr 17 23:36:46.275805 ignition[1328]: INFO : Stage: mount
Apr 17 23:36:46.275805 ignition[1328]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:36:46.275805 ignition[1328]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:36:46.275805 ignition[1328]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:36:46.279187 ignition[1328]: INFO : PUT result: OK
Apr 17 23:36:46.281529 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 17 23:36:46.284216 ignition[1328]: INFO : mount: mount passed
Apr 17 23:36:46.285429 ignition[1328]: INFO : Ignition finished successfully
Apr 17 23:36:46.285439 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 17 23:36:46.291033 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 17 23:36:46.303153 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 17 23:36:46.323943 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1343)
Apr 17 23:36:46.328750 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a5a0fe13-59ac-4c21-ab23-7fd1bfa02f60
Apr 17 23:36:46.328826 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 17 23:36:46.328848 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 17 23:36:46.335942 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 17 23:36:46.338605 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 17 23:36:46.360844 ignition[1360]: INFO : Ignition 2.19.0
Apr 17 23:36:46.360844 ignition[1360]: INFO : Stage: files
Apr 17 23:36:46.362859 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:36:46.362859 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:36:46.362859 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:36:46.364071 ignition[1360]: INFO : PUT result: OK
Apr 17 23:36:46.367074 ignition[1360]: DEBUG : files: compiled without relabeling support, skipping
Apr 17 23:36:46.367961 ignition[1360]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Apr 17 23:36:46.367961 ignition[1360]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 17 23:36:46.404517 ignition[1360]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 17 23:36:46.405809 ignition[1360]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Apr 17 23:36:46.405809 ignition[1360]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 17 23:36:46.405798 unknown[1360]: wrote ssh authorized keys file for user: core
Apr 17 23:36:46.408352 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:36:46.409253 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 17 23:36:46.501099 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 17 23:36:46.677557 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 17 23:36:46.677557 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:36:46.679793 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 17 23:36:46.968407 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 17 23:36:47.430541 ignition[1360]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 17 23:36:47.430541 ignition[1360]: INFO : files: op(b): [started]  processing unit "prepare-helm.service"
Apr 17 23:36:47.433537 ignition[1360]: INFO : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:36:47.435356 ignition[1360]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 17 23:36:47.435356 ignition[1360]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 17 23:36:47.435356 ignition[1360]: INFO : files: op(d): [started]  setting preset to enabled for "prepare-helm.service"
Apr 17 23:36:47.435356 ignition[1360]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 17 23:36:47.435356 ignition[1360]: INFO : files: createResultFile: createFiles: op(e): [started]  writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:36:47.435356 ignition[1360]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 17 23:36:47.435356 ignition[1360]: INFO : files: files passed
Apr 17 23:36:47.435356 ignition[1360]: INFO : Ignition finished successfully
Apr 17 23:36:47.436219 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 17 23:36:47.445117 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 17 23:36:47.448222 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 17 23:36:47.452617 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 17 23:36:47.453554 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 17 23:36:47.470246 initrd-setup-root-after-ignition[1388]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:36:47.470246 initrd-setup-root-after-ignition[1388]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:36:47.473808 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 17 23:36:47.475819 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:36:47.476519 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 17 23:36:47.480146 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 17 23:36:47.506450 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 17 23:36:47.506582 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 17 23:36:47.508169 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 17 23:36:47.509040 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 17 23:36:47.509814 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 17 23:36:47.517116 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 17 23:36:47.530157 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:36:47.536119 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 17 23:36:47.548716 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 17 23:36:47.549416 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:36:47.550467 systemd[1]: Stopped target timers.target - Timer Units.
Apr 17 23:36:47.551305 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 17 23:36:47.551480 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 17 23:36:47.552647 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 17 23:36:47.553495 systemd[1]: Stopped target basic.target - Basic System.
Apr 17 23:36:47.554379 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 17 23:36:47.555153 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 17 23:36:47.555905 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 17 23:36:47.556682 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 17 23:36:47.557453 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 17 23:36:47.558313 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 17 23:36:47.559466 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 17 23:36:47.560220 systemd[1]: Stopped target swap.target - Swaps.
Apr 17 23:36:47.560940 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 17 23:36:47.561122 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 17 23:36:47.562274 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 17 23:36:47.563069 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:36:47.563747 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 17 23:36:47.564483 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:36:47.564960 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 17 23:36:47.565129 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 17 23:36:47.566595 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 17 23:36:47.566773 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 17 23:36:47.567477 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 17 23:36:47.567626 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 17 23:36:47.575178 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 17 23:36:47.579488 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 17 23:36:47.580921 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 17 23:36:47.581131 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:36:47.585301 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 17 23:36:47.586178 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 17 23:36:47.592873 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 17 23:36:47.593754 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 17 23:36:47.597653 ignition[1412]: INFO : Ignition 2.19.0
Apr 17 23:36:47.597653 ignition[1412]: INFO : Stage: umount
Apr 17 23:36:47.601015 ignition[1412]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 17 23:36:47.601015 ignition[1412]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 17 23:36:47.601015 ignition[1412]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 17 23:36:47.601015 ignition[1412]: INFO : PUT result: OK
Apr 17 23:36:47.605742 ignition[1412]: INFO : umount: umount passed
Apr 17 23:36:47.605742 ignition[1412]: INFO : Ignition finished successfully
Apr 17 23:36:47.608481 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 17 23:36:47.609258 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 17 23:36:47.610647 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 17 23:36:47.611054 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 17 23:36:47.612336 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 17 23:36:47.612394 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 17 23:36:47.612886 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 17 23:36:47.612991 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 17 23:36:47.613459 systemd[1]: Stopped target network.target - Network.
Apr 17 23:36:47.613887 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 17 23:36:47.615898 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 17 23:36:47.616889 systemd[1]: Stopped target paths.target - Path Units.
Apr 17 23:36:47.617836 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 17 23:36:47.617906 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:36:47.618845 systemd[1]: Stopped target slices.target - Slice Units.
Apr 17 23:36:47.619802 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 17 23:36:47.620744 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 17 23:36:47.620800 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 17 23:36:47.621272 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 17 23:36:47.621310 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 17 23:36:47.622014 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 17 23:36:47.622079 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 17 23:36:47.622569 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 17 23:36:47.622625 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 17 23:36:47.623390 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 17 23:36:47.624027 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 17 23:36:47.626270 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 17 23:36:47.627964 systemd-networkd[1169]: eth0: DHCPv6 lease lost
Apr 17 23:36:47.631388 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 17 23:36:47.631530 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 17 23:36:47.632579 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 17 23:36:47.632710 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 17 23:36:47.636257 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 17 23:36:47.636323 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:36:47.644149 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 17 23:36:47.644730 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 17 23:36:47.644813 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 17 23:36:47.645499 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 17 23:36:47.645560 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:36:47.646201 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 17 23:36:47.646257 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:36:47.646896 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 17 23:36:47.646971 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 17 23:36:47.647667 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:36:47.662230 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 17 23:36:47.662384 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 17 23:36:47.663626 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 17 23:36:47.663807 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 17 23:36:47.665948 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 17 23:36:47.666044 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:36:47.667051 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 17 23:36:47.667086 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:36:47.667520 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 17 23:36:47.667582 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 17 23:36:47.668725 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 17 23:36:47.668785 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 17 23:36:47.669904 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 17 23:36:47.669990 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 17 23:36:47.677160 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 17 23:36:47.678666 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 17 23:36:47.679394 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:36:47.681074 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 17 23:36:47.681136 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:36:47.681945 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 17 23:36:47.682004 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:36:47.682575 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 17 23:36:47.682632 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 17 23:36:47.686668 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 17 23:36:47.686790 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 17 23:36:47.761658 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 17 23:36:47.761959 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 17 23:36:47.763524 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 17 23:36:47.764087 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 17 23:36:47.764201 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 17 23:36:47.770139 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 17 23:36:47.780077 systemd[1]: Switching root.
Apr 17 23:36:47.813669 systemd-journald[179]: Journal stopped
Apr 17 23:36:49.530252 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Apr 17 23:36:49.530358 kernel: SELinux: policy capability network_peer_controls=1
Apr 17 23:36:49.530389 kernel: SELinux: policy capability open_perms=1
Apr 17 23:36:49.530415 kernel: SELinux: policy capability extended_socket_class=1
Apr 17 23:36:49.530447 kernel: SELinux: policy capability always_check_network=0
Apr 17 23:36:49.530468 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 17 23:36:49.530489 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 17 23:36:49.530510 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 17 23:36:49.530530 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 17 23:36:49.530551 kernel: audit: type=1403 audit(1776469008.203:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 17 23:36:49.530574 systemd[1]: Successfully loaded SELinux policy in 51.522ms.
Apr 17 23:36:49.530597 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.895ms.
Apr 17 23:36:49.530622 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 17 23:36:49.530648 systemd[1]: Detected virtualization amazon.
Apr 17 23:36:49.530670 systemd[1]: Detected architecture x86-64.
Apr 17 23:36:49.530691 systemd[1]: Detected first boot.
Apr 17 23:36:49.530713 systemd[1]: Initializing machine ID from VM UUID.
Apr 17 23:36:49.530735 zram_generator::config[1454]: No configuration found.
Apr 17 23:36:49.530757 systemd[1]: Populated /etc with preset unit settings.
Apr 17 23:36:49.530779 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 17 23:36:49.530800 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 17 23:36:49.530826 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:36:49.530849 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 17 23:36:49.530872 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 17 23:36:49.530895 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 17 23:36:49.530961 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 17 23:36:49.530984 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 17 23:36:49.531006 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 17 23:36:49.531027 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 17 23:36:49.531048 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 17 23:36:49.531076 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 17 23:36:49.531098 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 17 23:36:49.531120 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 17 23:36:49.531143 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 17 23:36:49.531164 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 17 23:36:49.531187 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 17 23:36:49.531208 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 17 23:36:49.531230 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 17 23:36:49.531252 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 17 23:36:49.531277 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 17 23:36:49.531296 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 17 23:36:49.531317 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 17 23:36:49.531337 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 17 23:36:49.531362 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 17 23:36:49.531381 systemd[1]: Reached target slices.target - Slice Units.
Apr 17 23:36:49.531400 systemd[1]: Reached target swap.target - Swaps.
Apr 17 23:36:49.531419 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 17 23:36:49.531441 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 17 23:36:49.531462 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 17 23:36:49.531482 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 17 23:36:49.531501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 17 23:36:49.531520 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 17 23:36:49.531539 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 17 23:36:49.531558 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 17 23:36:49.531578 systemd[1]: Mounting media.mount - External Media Directory...
Apr 17 23:36:49.531598 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:36:49.531620 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 17 23:36:49.531640 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 17 23:36:49.531659 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 17 23:36:49.531678 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 17 23:36:49.531698 systemd[1]: Reached target machines.target - Containers.
Apr 17 23:36:49.531717 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 17 23:36:49.531737 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 17 23:36:49.531756 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 17 23:36:49.531777 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 17 23:36:49.531798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 17 23:36:49.531817 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 17 23:36:49.531836 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 17 23:36:49.531855 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 17 23:36:49.531875 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 17 23:36:49.531895 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 17 23:36:49.531928 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 17 23:36:49.531951 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 17 23:36:49.531969 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 17 23:36:49.531988 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 17 23:36:49.532007 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 17 23:36:49.532027 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 17 23:36:49.532046 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 17 23:36:49.532067 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 17 23:36:49.532086 kernel: loop: module loaded
Apr 17 23:36:49.532106 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 17 23:36:49.532125 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 17 23:36:49.532148 systemd[1]: Stopped verity-setup.service.
Apr 17 23:36:49.532168 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 17 23:36:49.532187 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 17 23:36:49.532207 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 17 23:36:49.532226 systemd[1]: Mounted media.mount - External Media Directory.
Apr 17 23:36:49.532248 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 17 23:36:49.532269 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 17 23:36:49.532288 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 17 23:36:49.532307 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 17 23:36:49.532326 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 17 23:36:49.532345 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 17 23:36:49.532364 kernel: ACPI: bus type drm_connector registered
Apr 17 23:36:49.532382 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 17 23:36:49.532405 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 17 23:36:49.532425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 17 23:36:49.532449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 17 23:36:49.532468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 17 23:36:49.532487 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 17 23:36:49.532509 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 17 23:36:49.532528 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 17 23:36:49.532548 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 17 23:36:49.532565 kernel: fuse: init (API version 7.39)
Apr 17 23:36:49.532583 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 17 23:36:49.532637 systemd-journald[1546]: Collecting audit messages is disabled.
Apr 17 23:36:49.532674 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 17 23:36:49.532695 systemd-journald[1546]: Journal started
Apr 17 23:36:49.532745 systemd-journald[1546]: Runtime Journal (/run/log/journal/ec2b00eff06bae1316cab9153b415ee4) is 4.7M, max 38.2M, 33.4M free.
Apr 17 23:36:49.092323 systemd[1]: Queued start job for default target multi-user.target.
Apr 17 23:36:49.142801 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 17 23:36:49.143245 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 17 23:36:49.539780 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 17 23:36:49.539565 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 17 23:36:49.540054 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 17 23:36:49.541377 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 17 23:36:49.560179 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 17 23:36:49.570509 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 17 23:36:49.581150 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 17 23:36:49.581854 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 17 23:36:49.581911 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 17 23:36:49.585608 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 17 23:36:49.595175 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 17 23:36:49.608111 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 17 23:36:49.609511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 17 23:36:49.614181 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 17 23:36:49.620103 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 17 23:36:49.620827 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 17 23:36:49.632189 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 17 23:36:49.633483 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 17 23:36:49.637611 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 17 23:36:49.649177 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 17 23:36:49.652942 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 17 23:36:49.657570 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 17 23:36:49.659556 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 17 23:36:49.664145 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 17 23:36:49.665148 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 17 23:36:49.666760 systemd-journald[1546]: Time spent on flushing to /var/log/journal/ec2b00eff06bae1316cab9153b415ee4 is 80.245ms for 984 entries.
Apr 17 23:36:49.666760 systemd-journald[1546]: System Journal (/var/log/journal/ec2b00eff06bae1316cab9153b415ee4) is 8.0M, max 195.6M, 187.6M free.
Apr 17 23:36:49.779185 systemd-journald[1546]: Received client request to flush runtime journal.
Apr 17 23:36:49.779270 kernel: loop0: detected capacity change from 0 to 228704
Apr 17 23:36:49.671531 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 17 23:36:49.681630 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 17 23:36:49.692149 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 17 23:36:49.701173 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 17 23:36:49.724540 udevadm[1593]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 17 23:36:49.768017 systemd-tmpfiles[1584]: ACLs are not supported, ignoring.
Apr 17 23:36:49.768042 systemd-tmpfiles[1584]: ACLs are not supported, ignoring.
Apr 17 23:36:49.787151 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 17 23:36:49.790701 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 17 23:36:49.795454 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 17 23:36:49.796482 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 17 23:36:49.806407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 17 23:36:49.818094 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 17 23:36:49.882686 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 17 23:36:49.896188 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 17 23:36:49.938798 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Apr 17 23:36:49.939286 systemd-tmpfiles[1604]: ACLs are not supported, ignoring.
Apr 17 23:36:49.948807 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 17 23:36:49.986335 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 17 23:36:50.031963 kernel: loop1: detected capacity change from 0 to 140768
Apr 17 23:36:50.177160 kernel: loop2: detected capacity change from 0 to 142488
Apr 17 23:36:50.295937 kernel: loop3: detected capacity change from 0 to 61336
Apr 17 23:36:50.410989 kernel: loop4: detected capacity change from 0 to 228704
Apr 17 23:36:50.446943 kernel: loop5: detected capacity change from 0 to 140768
Apr 17 23:36:50.479954 kernel: loop6: detected capacity change from 0 to 142488
Apr 17 23:36:50.506945 kernel: loop7: detected capacity change from 0 to 61336
Apr 17 23:36:50.530906 (sd-merge)[1612]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 17 23:36:50.536801 (sd-merge)[1612]: Merged extensions into '/usr'.
Apr 17 23:36:50.545794 systemd[1]: Reloading requested from client PID 1583 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 17 23:36:50.546157 systemd[1]: Reloading...
Apr 17 23:36:50.627861 zram_generator::config[1640]: No configuration found.
Apr 17 23:36:50.783788 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 17 23:36:50.846602 systemd[1]: Reloading finished in 299 ms.
Apr 17 23:36:50.878524 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 17 23:36:50.879353 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 17 23:36:50.888233 systemd[1]: Starting ensure-sysext.service...
Apr 17 23:36:50.892135 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 17 23:36:50.897181 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 17 23:36:50.923197 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 17 23:36:50.925056 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 17 23:36:50.926601 systemd-tmpfiles[1691]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 17 23:36:50.927222 systemd-tmpfiles[1691]: ACLs are not supported, ignoring.
Apr 17 23:36:50.927408 systemd-tmpfiles[1691]: ACLs are not supported, ignoring.
Apr 17 23:36:50.931744 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:36:50.931891 systemd-tmpfiles[1691]: Skipping /boot
Apr 17 23:36:50.932104 systemd[1]: Reloading requested from client PID 1690 ('systemctl') (unit ensure-sysext.service)...
Apr 17 23:36:50.932125 systemd[1]: Reloading...
Apr 17 23:36:50.953482 systemd-tmpfiles[1691]: Detected autofs mount point /boot during canonicalization of boot.
Apr 17 23:36:50.957579 systemd-tmpfiles[1691]: Skipping /boot Apr 17 23:36:50.970829 systemd-udevd[1692]: Using default interface naming scheme 'v255'. Apr 17 23:36:51.061993 zram_generator::config[1720]: No configuration found. Apr 17 23:36:51.183644 (udev-worker)[1737]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:36:51.288959 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 17 23:36:51.295072 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 17 23:36:51.305194 kernel: ACPI: button: Power Button [PWRF] Apr 17 23:36:51.305297 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Apr 17 23:36:51.307712 kernel: ACPI: button: Sleep Button [SLPF] Apr 17 23:36:51.361872 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:36:51.368966 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Apr 17 23:36:51.481963 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1735) Apr 17 23:36:51.562223 ldconfig[1578]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 17 23:36:51.562816 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 17 23:36:51.562995 systemd[1]: Reloading finished in 630 ms. Apr 17 23:36:51.591938 kernel: mousedev: PS/2 mouse device common for all mice Apr 17 23:36:51.595050 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 17 23:36:51.596626 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 17 23:36:51.599871 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Apr 17 23:36:51.658218 systemd[1]: Finished ensure-sysext.service. Apr 17 23:36:51.659083 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 17 23:36:51.672258 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Apr 17 23:36:51.678997 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:36:51.684152 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 17 23:36:51.690131 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 17 23:36:51.690999 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 17 23:36:51.694124 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 17 23:36:51.703131 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 17 23:36:51.706245 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 17 23:36:51.721203 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 17 23:36:51.725479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 17 23:36:51.726831 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 17 23:36:51.742318 lvm[1888]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:36:51.730720 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 17 23:36:51.741304 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 17 23:36:51.751126 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Apr 17 23:36:51.765255 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 17 23:36:51.765963 systemd[1]: Reached target time-set.target - System Time Set. Apr 17 23:36:51.770979 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 17 23:36:51.775082 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 17 23:36:51.775735 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 17 23:36:51.778118 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 17 23:36:51.779194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 17 23:36:51.779394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 17 23:36:51.781159 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 17 23:36:51.781353 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 17 23:36:51.782799 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 17 23:36:51.783713 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 17 23:36:51.785496 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 17 23:36:51.785705 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 17 23:36:51.798310 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 17 23:36:51.807215 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 17 23:36:51.807867 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 17 23:36:51.807969 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Apr 17 23:36:51.814098 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 17 23:36:51.849364 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 17 23:36:51.854069 lvm[1914]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 17 23:36:51.859943 augenrules[1922]: No rules Apr 17 23:36:51.864653 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 17 23:36:51.876109 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 17 23:36:51.894719 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 17 23:36:51.905361 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 17 23:36:51.916333 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 17 23:36:51.927455 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 17 23:36:51.928389 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 17 23:36:51.930960 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 17 23:36:51.938004 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 17 23:36:51.997537 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 17 23:36:52.032519 systemd-networkd[1902]: lo: Link UP Apr 17 23:36:52.032897 systemd-networkd[1902]: lo: Gained carrier Apr 17 23:36:52.034965 systemd-networkd[1902]: Enumeration completed Apr 17 23:36:52.035268 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 17 23:36:52.037467 systemd-networkd[1902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:36:52.037473 systemd-networkd[1902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 17 23:36:52.040228 systemd-resolved[1904]: Positive Trust Anchors: Apr 17 23:36:52.040424 systemd-resolved[1904]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 17 23:36:52.040484 systemd-resolved[1904]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 17 23:36:52.042127 systemd-networkd[1902]: eth0: Link UP Apr 17 23:36:52.042320 systemd-networkd[1902]: eth0: Gained carrier Apr 17 23:36:52.042346 systemd-networkd[1902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 17 23:36:52.045046 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 17 23:36:52.050811 systemd-resolved[1904]: Defaulting to hostname 'linux'. Apr 17 23:36:52.052084 systemd-networkd[1902]: eth0: DHCPv4 address 172.31.16.109/20, gateway 172.31.16.1 acquired from 172.31.16.1 Apr 17 23:36:52.053485 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 17 23:36:52.054257 systemd[1]: Reached target network.target - Network. Apr 17 23:36:52.055065 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 17 23:36:52.055705 systemd[1]: Reached target sysinit.target - System Initialization. Apr 17 23:36:52.056273 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 17 23:36:52.056696 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 17 23:36:52.057285 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 17 23:36:52.057844 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 17 23:36:52.058265 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 17 23:36:52.058629 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 17 23:36:52.058673 systemd[1]: Reached target paths.target - Path Units. Apr 17 23:36:52.059060 systemd[1]: Reached target timers.target - Timer Units. Apr 17 23:36:52.059831 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 17 23:36:52.061766 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 17 23:36:52.070108 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 17 23:36:52.071376 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 17 23:36:52.071996 systemd[1]: Reached target sockets.target - Socket Units. Apr 17 23:36:52.072441 systemd[1]: Reached target basic.target - Basic System. Apr 17 23:36:52.072865 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:36:52.072979 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 17 23:36:52.074226 systemd[1]: Starting containerd.service - containerd container runtime... Apr 17 23:36:52.079104 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Apr 17 23:36:52.083119 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 17 23:36:52.087076 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 17 23:36:52.090252 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 17 23:36:52.092022 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 17 23:36:52.102560 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 17 23:36:52.108157 systemd[1]: Started ntpd.service - Network Time Service. Apr 17 23:36:52.111073 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 17 23:36:52.118152 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 17 23:36:52.128159 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 17 23:36:52.131584 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 17 23:36:52.144040 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 17 23:36:52.146358 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 17 23:36:52.148190 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 17 23:36:52.150017 systemd[1]: Starting update-engine.service - Update Engine... Apr 17 23:36:52.159050 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 17 23:36:52.167995 jq[1951]: false Apr 17 23:36:52.185126 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 17 23:36:52.185362 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Apr 17 23:36:52.216865 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 17 23:36:52.218723 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 17 23:36:52.237585 dbus-daemon[1950]: [system] SELinux support is enabled Apr 17 23:36:52.238478 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 17 23:36:52.260944 jq[1963]: true Apr 17 23:36:52.261541 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 17 23:36:52.261618 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 17 23:36:52.262306 dbus-daemon[1950]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1902 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 17 23:36:52.263773 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 17 23:36:52.265964 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:36:52.265964 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:36:52.265964 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: ---------------------------------------------------- Apr 17 23:36:52.265964 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:36:52.265964 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:36:52.265964 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: corporation. Support and training for ntp-4 are Apr 17 23:36:52.265964 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: available at https://www.nwtime.org/support Apr 17 23:36:52.265964 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: ---------------------------------------------------- Apr 17 23:36:52.263912 ntpd[1954]: ntpd 4.2.8p17@1.4004-o Fri Apr 17 21:46:06 UTC 2026 (1): Starting Apr 17 23:36:52.263803 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Apr 17 23:36:52.263952 ntpd[1954]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 17 23:36:52.263964 ntpd[1954]: ---------------------------------------------------- Apr 17 23:36:52.263974 ntpd[1954]: ntp-4 is maintained by Network Time Foundation, Apr 17 23:36:52.263985 ntpd[1954]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 17 23:36:52.263995 ntpd[1954]: corporation. Support and training for ntp-4 are Apr 17 23:36:52.264004 ntpd[1954]: available at https://www.nwtime.org/support Apr 17 23:36:52.264014 ntpd[1954]: ---------------------------------------------------- Apr 17 23:36:52.265109 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 17 23:36:52.270719 ntpd[1954]: proto: precision = 0.079 usec (-24) Apr 17 23:36:52.278514 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: proto: precision = 0.079 usec (-24) Apr 17 23:36:52.278514 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: basedate set to 2026-04-05 Apr 17 23:36:52.278514 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: gps base set to 2026-04-05 (week 2413) Apr 17 23:36:52.275853 ntpd[1954]: basedate set to 2026-04-05 Apr 17 23:36:52.275874 ntpd[1954]: gps base set to 2026-04-05 (week 2413) Apr 17 23:36:52.281668 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Apr 17 23:36:52.284257 ntpd[1954]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:36:52.284323 ntpd[1954]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:36:52.284413 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: Listen and drop on 0 v6wildcard [::]:123 Apr 17 23:36:52.284413 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 17 23:36:52.284512 ntpd[1954]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:36:52.284562 ntpd[1954]: Listen normally on 3 eth0 172.31.16.109:123 Apr 17 23:36:52.284633 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: Listen normally on 2 lo 127.0.0.1:123 Apr 17 23:36:52.284633 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: Listen normally on 3 eth0 172.31.16.109:123 Apr 17 23:36:52.284633 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: Listen normally on 4 lo [::1]:123 Apr 17 23:36:52.284606 ntpd[1954]: Listen normally on 4 lo [::1]:123 Apr 17 23:36:52.284796 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: bind(21) AF_INET6 fe80::429:88ff:fe8a:86df%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:36:52.284796 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: unable to create socket on eth0 (5) for fe80::429:88ff:fe8a:86df%2#123 Apr 17 23:36:52.284796 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: failed to init interface for address fe80::429:88ff:fe8a:86df%2 Apr 17 23:36:52.284796 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: Listening on routing socket on fd #21 for interface updates Apr 17 23:36:52.284653 ntpd[1954]: bind(21) AF_INET6 fe80::429:88ff:fe8a:86df%2#123 flags 0x11 failed: Cannot assign requested address Apr 17 23:36:52.284677 ntpd[1954]: unable to create socket on eth0 (5) for fe80::429:88ff:fe8a:86df%2#123 Apr 17 23:36:52.284693 ntpd[1954]: failed to init interface for address fe80::429:88ff:fe8a:86df%2 Apr 17 23:36:52.284726 ntpd[1954]: Listening on routing socket on fd #21 for interface updates Apr 17 23:36:52.286825 (ntainerd)[1982]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 17 23:36:52.311686 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:36:52.311732 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:36:52.311879 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:36:52.311879 ntpd[1954]: 17 Apr 23:36:52 ntpd[1954]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 17 23:36:52.317526 update_engine[1961]: I20260417 23:36:52.317350 1961 main.cc:92] Flatcar Update Engine starting Apr 17 23:36:52.330742 extend-filesystems[1952]: Found loop4 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found loop5 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found loop6 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found loop7 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found nvme0n1 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found nvme0n1p1 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found nvme0n1p2 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found nvme0n1p3 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found usr Apr 17 23:36:52.330742 extend-filesystems[1952]: Found nvme0n1p4 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found nvme0n1p6 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found nvme0n1p7 Apr 17 23:36:52.330742 extend-filesystems[1952]: Found nvme0n1p9 Apr 17 23:36:52.330742 extend-filesystems[1952]: Checking size of /dev/nvme0n1p9 Apr 17 23:36:52.331795 systemd[1]: Started update-engine.service - Update Engine. Apr 17 23:36:52.376759 tar[1972]: linux-amd64/LICENSE Apr 17 23:36:52.376759 tar[1972]: linux-amd64/helm Apr 17 23:36:52.380190 update_engine[1961]: I20260417 23:36:52.334139 1961 update_check_scheduler.cc:74] Next update check in 4m11s Apr 17 23:36:52.380238 jq[1983]: true Apr 17 23:36:52.343208 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 17 23:36:52.347803 systemd[1]: motdgen.service: Deactivated successfully. 
Apr 17 23:36:52.348077 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 17 23:36:52.407082 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 17 23:36:52.417089 extend-filesystems[1952]: Resized partition /dev/nvme0n1p9 Apr 17 23:36:52.430268 extend-filesystems[2003]: resize2fs 1.47.1 (20-May-2024) Apr 17 23:36:52.442818 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Apr 17 23:36:52.441496 systemd-logind[1959]: Watching system buttons on /dev/input/event1 (Power Button) Apr 17 23:36:52.441518 systemd-logind[1959]: Watching system buttons on /dev/input/event2 (Sleep Button) Apr 17 23:36:52.441541 systemd-logind[1959]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 17 23:36:52.444118 systemd-logind[1959]: New seat seat0. Apr 17 23:36:52.447189 systemd[1]: Started systemd-logind.service - User Login Management. Apr 17 23:36:52.553058 coreos-metadata[1949]: Apr 17 23:36:52.552 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.555 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.566 INFO Fetch successful Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.566 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.566 INFO Fetch successful Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.566 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.569 INFO Fetch successful Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.569 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.570 INFO Fetch successful Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.570 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.572 INFO Fetch failed with 404: resource not found Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.572 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.575 INFO Fetch successful Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.575 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.577 INFO Fetch successful Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.577 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.578 INFO Fetch successful Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.578 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.579 INFO Fetch successful Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.579 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Apr 17 23:36:52.586289 coreos-metadata[1949]: Apr 17 23:36:52.581 INFO Fetch successful Apr 17 23:36:52.608650 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 17 23:36:52.610041 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Apr 17 23:36:52.629969 dbus-daemon[1950]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1987 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 17 23:36:52.652761 systemd[1]: Starting polkit.service - Authorization Manager... Apr 17 23:36:52.664077 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (1749) Apr 17 23:36:52.665533 bash[2025]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:36:52.670991 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 17 23:36:52.687258 systemd[1]: Starting sshkeys.service... Apr 17 23:36:52.703986 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Apr 17 23:36:52.723896 extend-filesystems[2003]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Apr 17 23:36:52.723896 extend-filesystems[2003]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 17 23:36:52.723896 extend-filesystems[2003]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Apr 17 23:36:52.734656 extend-filesystems[1952]: Resized filesystem in /dev/nvme0n1p9 Apr 17 23:36:52.730398 polkitd[2033]: Started polkitd version 121 Apr 17 23:36:52.726325 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 17 23:36:52.726566 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 17 23:36:52.744005 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 17 23:36:52.750620 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 17 23:36:52.788494 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
Apr 17 23:36:52.793208 polkitd[2033]: Loading rules from directory /etc/polkit-1/rules.d Apr 17 23:36:52.793299 polkitd[2033]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 17 23:36:52.800286 polkitd[2033]: Finished loading, compiling and executing 2 rules Apr 17 23:36:52.800618 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 17 23:36:52.814386 dbus-daemon[1950]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 17 23:36:52.815757 sshd_keygen[1986]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 17 23:36:52.816110 polkitd[2033]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 17 23:36:52.820801 systemd[1]: Started polkit.service - Authorization Manager. Apr 17 23:36:52.877997 systemd-hostnamed[1987]: Hostname set to (transient) Apr 17 23:36:52.882000 systemd-resolved[1904]: System hostname changed to 'ip-172-31-16-109'. Apr 17 23:36:52.897059 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 17 23:36:52.907025 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 17 23:36:52.952507 systemd[1]: issuegen.service: Deactivated successfully. Apr 17 23:36:52.952768 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 17 23:36:52.964360 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 17 23:36:53.052878 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 17 23:36:53.062457 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 17 23:36:53.067101 locksmithd[1993]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 17 23:36:53.074488 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 17 23:36:53.076377 systemd[1]: Reached target getty.target - Login Prompts. 
Apr 17 23:36:53.088741 coreos-metadata[2074]: Apr 17 23:36:53.087 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Apr 17 23:36:53.088741 coreos-metadata[2074]: Apr 17 23:36:53.088 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Apr 17 23:36:53.090253 coreos-metadata[2074]: Apr 17 23:36:53.089 INFO Fetch successful Apr 17 23:36:53.090253 coreos-metadata[2074]: Apr 17 23:36:53.089 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 17 23:36:53.091870 coreos-metadata[2074]: Apr 17 23:36:53.091 INFO Fetch successful Apr 17 23:36:53.095864 unknown[2074]: wrote ssh authorized keys file for user: core Apr 17 23:36:53.157189 update-ssh-keys[2146]: Updated "/home/core/.ssh/authorized_keys" Apr 17 23:36:53.158824 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 17 23:36:53.167498 systemd[1]: Finished sshkeys.service. Apr 17 23:36:53.199012 systemd-networkd[1902]: eth0: Gained IPv6LL Apr 17 23:36:53.221104 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 17 23:36:53.224128 systemd[1]: Reached target network-online.target - Network is Online. Apr 17 23:36:53.234379 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Apr 17 23:36:53.248180 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:36:53.255547 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 17 23:36:53.279497 containerd[1982]: time="2026-04-17T23:36:53.279389886Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 17 23:36:53.367023 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Apr 17 23:36:53.387833 amazon-ssm-agent[2158]: Initializing new seelog logger
Apr 17 23:36:53.387833 amazon-ssm-agent[2158]: New Seelog Logger Creation Complete
Apr 17 23:36:53.387833 amazon-ssm-agent[2158]: 2026/04/17 23:36:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:36:53.387833 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:36:53.387833 amazon-ssm-agent[2158]: 2026/04/17 23:36:53 processing appconfig overrides
Apr 17 23:36:53.388375 amazon-ssm-agent[2158]: 2026/04/17 23:36:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:36:53.388375 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:36:53.388375 amazon-ssm-agent[2158]: 2026/04/17 23:36:53 processing appconfig overrides
Apr 17 23:36:53.388869 amazon-ssm-agent[2158]: 2026/04/17 23:36:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:36:53.388869 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:36:53.388869 amazon-ssm-agent[2158]: 2026/04/17 23:36:53 processing appconfig overrides
Apr 17 23:36:53.392708 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO Proxy environment variables:
Apr 17 23:36:53.394711 containerd[1982]: time="2026-04-17T23:36:53.394663875Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:36:53.396646 amazon-ssm-agent[2158]: 2026/04/17 23:36:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:36:53.396646 amazon-ssm-agent[2158]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 17 23:36:53.396774 amazon-ssm-agent[2158]: 2026/04/17 23:36:53 processing appconfig overrides
Apr 17 23:36:53.404849 containerd[1982]: time="2026-04-17T23:36:53.404791200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:36:53.405475 containerd[1982]: time="2026-04-17T23:36:53.405441531Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 17 23:36:53.405598 containerd[1982]: time="2026-04-17T23:36:53.405580798Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 17 23:36:53.405875 containerd[1982]: time="2026-04-17T23:36:53.405848123Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 17 23:36:53.406369 containerd[1982]: time="2026-04-17T23:36:53.406344340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 17 23:36:53.406898 containerd[1982]: time="2026-04-17T23:36:53.406870816Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:36:53.407004 containerd[1982]: time="2026-04-17T23:36:53.406988876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:36:53.407363 containerd[1982]: time="2026-04-17T23:36:53.407301644Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:36:53.407980 containerd[1982]: time="2026-04-17T23:36:53.407957314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 17 23:36:53.408268 containerd[1982]: time="2026-04-17T23:36:53.408244003Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:36:53.408343 containerd[1982]: time="2026-04-17T23:36:53.408326441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 17 23:36:53.408535 containerd[1982]: time="2026-04-17T23:36:53.408517758Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:36:53.410124 containerd[1982]: time="2026-04-17T23:36:53.409257692Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 17 23:36:53.410124 containerd[1982]: time="2026-04-17T23:36:53.409448352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 17 23:36:53.410124 containerd[1982]: time="2026-04-17T23:36:53.409470487Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 17 23:36:53.410124 containerd[1982]: time="2026-04-17T23:36:53.409584829Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 17 23:36:53.410124 containerd[1982]: time="2026-04-17T23:36:53.409649039Z" level=info msg="metadata content store policy set" policy=shared
Apr 17 23:36:53.419638 containerd[1982]: time="2026-04-17T23:36:53.419557939Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 17 23:36:53.419797 containerd[1982]: time="2026-04-17T23:36:53.419778853Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 17 23:36:53.419908 containerd[1982]: time="2026-04-17T23:36:53.419892427Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 17 23:36:53.420018 containerd[1982]: time="2026-04-17T23:36:53.420002498Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 17 23:36:53.420941 containerd[1982]: time="2026-04-17T23:36:53.420100505Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 17 23:36:53.420941 containerd[1982]: time="2026-04-17T23:36:53.420301360Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 17 23:36:53.420941 containerd[1982]: time="2026-04-17T23:36:53.420660924Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 17 23:36:53.420941 containerd[1982]: time="2026-04-17T23:36:53.420823749Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 17 23:36:53.420941 containerd[1982]: time="2026-04-17T23:36:53.420848680Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 17 23:36:53.420941 containerd[1982]: time="2026-04-17T23:36:53.420874506Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 17 23:36:53.420941 containerd[1982]: time="2026-04-17T23:36:53.420899239Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.422975055Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423021624Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423046483Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423069847Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423093890Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423112693Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423131155Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423161800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423184011Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423203224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423224836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423244618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423264905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.423573 containerd[1982]: time="2026-04-17T23:36:53.423282960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.424151 containerd[1982]: time="2026-04-17T23:36:53.423302809Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.424151 containerd[1982]: time="2026-04-17T23:36:53.423322009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.424151 containerd[1982]: time="2026-04-17T23:36:53.423354976Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.424151 containerd[1982]: time="2026-04-17T23:36:53.423373970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.424151 containerd[1982]: time="2026-04-17T23:36:53.423391562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.424151 containerd[1982]: time="2026-04-17T23:36:53.423410928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.424151 containerd[1982]: time="2026-04-17T23:36:53.423435227Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Apr 17 23:36:53.424151 containerd[1982]: time="2026-04-17T23:36:53.423476861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.424151 containerd[1982]: time="2026-04-17T23:36:53.423497023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.424151 containerd[1982]: time="2026-04-17T23:36:53.423513879Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Apr 17 23:36:53.427997 containerd[1982]: time="2026-04-17T23:36:53.426145398Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Apr 17 23:36:53.427997 containerd[1982]: time="2026-04-17T23:36:53.426191408Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Apr 17 23:36:53.427997 containerd[1982]: time="2026-04-17T23:36:53.426210044Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Apr 17 23:36:53.427997 containerd[1982]: time="2026-04-17T23:36:53.426230557Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Apr 17 23:36:53.427997 containerd[1982]: time="2026-04-17T23:36:53.426246457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.427997 containerd[1982]: time="2026-04-17T23:36:53.426266512Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Apr 17 23:36:53.427997 containerd[1982]: time="2026-04-17T23:36:53.426282439Z" level=info msg="NRI interface is disabled by configuration."
Apr 17 23:36:53.427997 containerd[1982]: time="2026-04-17T23:36:53.426297383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Apr 17 23:36:53.428357 containerd[1982]: time="2026-04-17T23:36:53.426709400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Apr 17 23:36:53.428357 containerd[1982]: time="2026-04-17T23:36:53.426802065Z" level=info msg="Connect containerd service"
Apr 17 23:36:53.428357 containerd[1982]: time="2026-04-17T23:36:53.426856692Z" level=info msg="using legacy CRI server"
Apr 17 23:36:53.428357 containerd[1982]: time="2026-04-17T23:36:53.426866739Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Apr 17 23:36:53.428357 containerd[1982]: time="2026-04-17T23:36:53.427041135Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Apr 17 23:36:53.428357 containerd[1982]: time="2026-04-17T23:36:53.427803954Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Apr 17 23:36:53.432721 containerd[1982]: time="2026-04-17T23:36:53.432149496Z" level=info msg="Start subscribing containerd event"
Apr 17 23:36:53.432721 containerd[1982]: time="2026-04-17T23:36:53.432228199Z" level=info msg="Start recovering state"
Apr 17 23:36:53.432721 containerd[1982]: time="2026-04-17T23:36:53.432322060Z" level=info msg="Start event monitor"
Apr 17 23:36:53.432721 containerd[1982]: time="2026-04-17T23:36:53.432351156Z" level=info msg="Start snapshots syncer"
Apr 17 23:36:53.432721 containerd[1982]: time="2026-04-17T23:36:53.432367468Z" level=info msg="Start cni network conf syncer for default"
Apr 17 23:36:53.432721 containerd[1982]: time="2026-04-17T23:36:53.432378675Z" level=info msg="Start streaming server"
Apr 17 23:36:53.433243 containerd[1982]: time="2026-04-17T23:36:53.433220723Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Apr 17 23:36:53.434535 containerd[1982]: time="2026-04-17T23:36:53.434496517Z" level=info msg=serving... address=/run/containerd/containerd.sock
Apr 17 23:36:53.447964 containerd[1982]: time="2026-04-17T23:36:53.445879758Z" level=info msg="containerd successfully booted in 0.170996s"
Apr 17 23:36:53.446007 systemd[1]: Started containerd.service - containerd container runtime.
Apr 17 23:36:53.493992 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO https_proxy:
Apr 17 23:36:53.596471 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO http_proxy:
Apr 17 23:36:53.668015 tar[1972]: linux-amd64/README.md
Apr 17 23:36:53.688723 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 17 23:36:53.694503 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO no_proxy:
Apr 17 23:36:53.794003 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO Checking if agent identity type OnPrem can be assumed
Apr 17 23:36:53.892253 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO Checking if agent identity type EC2 can be assumed
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO Agent will take identity from EC2
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [amazon-ssm-agent] Starting Core Agent
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [Registrar] Starting registrar module
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 17 23:36:53.980537 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [EC2Identity] EC2 registration was successful.
Apr 17 23:36:53.980948 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [CredentialRefresher] credentialRefresher has started
Apr 17 23:36:53.980948 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 17 23:36:53.980948 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 17 23:36:53.991233 amazon-ssm-agent[2158]: 2026-04-17 23:36:53 INFO [CredentialRefresher] Next credential rotation will be in 30.49999348875 minutes
Apr 17 23:36:54.994933 amazon-ssm-agent[2158]: 2026-04-17 23:36:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 17 23:36:55.095474 amazon-ssm-agent[2158]: 2026-04-17 23:36:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2193) started
Apr 17 23:36:55.196956 amazon-ssm-agent[2158]: 2026-04-17 23:36:54 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 17 23:36:55.264454 ntpd[1954]: Listen normally on 6 eth0 [fe80::429:88ff:fe8a:86df%2]:123
Apr 17 23:36:55.265122 ntpd[1954]: 17 Apr 23:36:55 ntpd[1954]: Listen normally on 6 eth0 [fe80::429:88ff:fe8a:86df%2]:123
Apr 17 23:36:55.787253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:36:55.788538 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 17 23:36:55.790070 systemd[1]: Startup finished in 616ms (kernel) + 6.496s (initrd) + 7.634s (userspace) = 14.748s.
Apr 17 23:36:55.807430 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:36:56.901455 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 17 23:36:56.909254 systemd[1]: Started sshd@0-172.31.16.109:22-20.229.252.112:47920.service - OpenSSH per-connection server daemon (20.229.252.112:47920).
Apr 17 23:36:56.949452 kubelet[2208]: E0417 23:36:56.949412    2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:36:56.952119 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:36:56.952333 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:36:56.952671 systemd[1]: kubelet.service: Consumed 1.052s CPU time.
Apr 17 23:36:57.955896 sshd[2219]: Accepted publickey for core from 20.229.252.112 port 47920 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:36:57.958249 sshd[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:36:57.968099 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 17 23:36:57.980717 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 17 23:36:57.983547 systemd-logind[1959]: New session 1 of user core.
Apr 17 23:36:57.995690 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 17 23:36:58.002719 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 17 23:36:58.016911 (systemd)[2224]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 17 23:36:58.134700 systemd[2224]: Queued start job for default target default.target.
Apr 17 23:36:58.143237 systemd[2224]: Created slice app.slice - User Application Slice.
Apr 17 23:36:58.143281 systemd[2224]: Reached target paths.target - Paths.
Apr 17 23:36:58.143303 systemd[2224]: Reached target timers.target - Timers.
Apr 17 23:36:58.144727 systemd[2224]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 17 23:36:58.157094 systemd[2224]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 17 23:36:58.157219 systemd[2224]: Reached target sockets.target - Sockets.
Apr 17 23:36:58.157234 systemd[2224]: Reached target basic.target - Basic System.
Apr 17 23:36:58.157272 systemd[2224]: Reached target default.target - Main User Target.
Apr 17 23:36:58.157302 systemd[2224]: Startup finished in 133ms.
Apr 17 23:36:58.157391 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 17 23:36:58.165161 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 17 23:36:58.883146 systemd[1]: Started sshd@1-172.31.16.109:22-20.229.252.112:47936.service - OpenSSH per-connection server daemon (20.229.252.112:47936).
Apr 17 23:37:00.060560 systemd-resolved[1904]: Clock change detected. Flushing caches.
Apr 17 23:37:00.700339 sshd[2235]: Accepted publickey for core from 20.229.252.112 port 47936 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:37:00.702046 sshd[2235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:00.707303 systemd-logind[1959]: New session 2 of user core.
Apr 17 23:37:00.713954 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 17 23:37:01.409660 sshd[2235]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:01.414150 systemd-logind[1959]: Session 2 logged out. Waiting for processes to exit.
Apr 17 23:37:01.415205 systemd[1]: sshd@1-172.31.16.109:22-20.229.252.112:47936.service: Deactivated successfully.
Apr 17 23:37:01.417304 systemd[1]: session-2.scope: Deactivated successfully.
Apr 17 23:37:01.418365 systemd-logind[1959]: Removed session 2.
Apr 17 23:37:01.586477 systemd[1]: Started sshd@2-172.31.16.109:22-20.229.252.112:47944.service - OpenSSH per-connection server daemon (20.229.252.112:47944).
Apr 17 23:37:02.594075 sshd[2242]: Accepted publickey for core from 20.229.252.112 port 47944 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:37:02.595624 sshd[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:02.600815 systemd-logind[1959]: New session 3 of user core.
Apr 17 23:37:02.606974 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 17 23:37:03.290053 sshd[2242]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:03.294256 systemd[1]: sshd@2-172.31.16.109:22-20.229.252.112:47944.service: Deactivated successfully.
Apr 17 23:37:03.296567 systemd[1]: session-3.scope: Deactivated successfully.
Apr 17 23:37:03.297473 systemd-logind[1959]: Session 3 logged out. Waiting for processes to exit.
Apr 17 23:37:03.298545 systemd-logind[1959]: Removed session 3.
Apr 17 23:37:03.460063 systemd[1]: Started sshd@3-172.31.16.109:22-20.229.252.112:47954.service - OpenSSH per-connection server daemon (20.229.252.112:47954).
Apr 17 23:37:04.433097 sshd[2249]: Accepted publickey for core from 20.229.252.112 port 47954 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:37:04.434742 sshd[2249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:04.439763 systemd-logind[1959]: New session 4 of user core.
Apr 17 23:37:04.446930 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 17 23:37:05.110369 sshd[2249]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:05.114532 systemd[1]: sshd@3-172.31.16.109:22-20.229.252.112:47954.service: Deactivated successfully.
Apr 17 23:37:05.116356 systemd[1]: session-4.scope: Deactivated successfully.
Apr 17 23:37:05.117090 systemd-logind[1959]: Session 4 logged out. Waiting for processes to exit.
Apr 17 23:37:05.118461 systemd-logind[1959]: Removed session 4.
Apr 17 23:37:05.292912 systemd[1]: Started sshd@4-172.31.16.109:22-20.229.252.112:53320.service - OpenSSH per-connection server daemon (20.229.252.112:53320).
Apr 17 23:37:06.322407 sshd[2256]: Accepted publickey for core from 20.229.252.112 port 53320 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:37:06.323097 sshd[2256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:06.328056 systemd-logind[1959]: New session 5 of user core.
Apr 17 23:37:06.334955 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 17 23:37:06.901976 sudo[2259]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 17 23:37:06.902378 sudo[2259]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:37:06.914605 sudo[2259]: pam_unix(sudo:session): session closed for user root
Apr 17 23:37:07.081146 sshd[2256]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:07.085016 systemd[1]: sshd@4-172.31.16.109:22-20.229.252.112:53320.service: Deactivated successfully.
Apr 17 23:37:07.087048 systemd[1]: session-5.scope: Deactivated successfully.
Apr 17 23:37:07.088542 systemd-logind[1959]: Session 5 logged out. Waiting for processes to exit.
Apr 17 23:37:07.090125 systemd-logind[1959]: Removed session 5.
Apr 17 23:37:07.242200 systemd[1]: Started sshd@5-172.31.16.109:22-20.229.252.112:53324.service - OpenSSH per-connection server daemon (20.229.252.112:53324).
Apr 17 23:37:07.850663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 17 23:37:07.858070 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 17 23:37:08.048963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 17 23:37:08.060125 (kubelet)[2274]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 17 23:37:08.105071 kubelet[2274]: E0417 23:37:08.104941    2274 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 17 23:37:08.109055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 17 23:37:08.109259 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:37:08.229134 sshd[2264]: Accepted publickey for core from 20.229.252.112 port 53324 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:37:08.230770 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:08.236041 systemd-logind[1959]: New session 6 of user core.
Apr 17 23:37:08.246935 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 17 23:37:08.749864 sudo[2283]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 17 23:37:08.750253 sudo[2283]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:37:08.754071 sudo[2283]: pam_unix(sudo:session): session closed for user root
Apr 17 23:37:08.759419 sudo[2282]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 17 23:37:08.759871 sudo[2282]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:37:08.776117 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 17 23:37:08.778107 auditctl[2286]: No rules
Apr 17 23:37:08.778525 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 17 23:37:08.778766 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 17 23:37:08.781591 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 17 23:37:08.811667 augenrules[2304]: No rules
Apr 17 23:37:08.813104 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 17 23:37:08.814727 sudo[2282]: pam_unix(sudo:session): session closed for user root
Apr 17 23:37:08.974046 sshd[2264]: pam_unix(sshd:session): session closed for user core
Apr 17 23:37:08.977259 systemd[1]: sshd@5-172.31.16.109:22-20.229.252.112:53324.service: Deactivated successfully.
Apr 17 23:37:08.979324 systemd[1]: session-6.scope: Deactivated successfully.
Apr 17 23:37:08.981154 systemd-logind[1959]: Session 6 logged out. Waiting for processes to exit.
Apr 17 23:37:08.982366 systemd-logind[1959]: Removed session 6.
Apr 17 23:37:09.142912 systemd[1]: Started sshd@6-172.31.16.109:22-20.229.252.112:53334.service - OpenSSH per-connection server daemon (20.229.252.112:53334).
Apr 17 23:37:10.122099 sshd[2312]: Accepted publickey for core from 20.229.252.112 port 53334 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:37:10.123530 sshd[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:37:10.128586 systemd-logind[1959]: New session 7 of user core.
Apr 17 23:37:10.134944 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 17 23:37:10.643957 sudo[2315]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 17 23:37:10.644350 sudo[2315]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 17 23:37:11.145098 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 17 23:37:11.147312 (dockerd)[2331]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 17 23:37:11.738712 dockerd[2331]: time="2026-04-17T23:37:11.738650140Z" level=info msg="Starting up"
Apr 17 23:37:11.910349 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3232926449-merged.mount: Deactivated successfully.
Apr 17 23:37:11.946150 dockerd[2331]: time="2026-04-17T23:37:11.945890918Z" level=info msg="Loading containers: start."
Apr 17 23:37:12.101718 kernel: Initializing XFRM netlink socket
Apr 17 23:37:12.155162 (udev-worker)[2353]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:37:12.214253 systemd-networkd[1902]: docker0: Link UP
Apr 17 23:37:12.237245 dockerd[2331]: time="2026-04-17T23:37:12.237192262Z" level=info msg="Loading containers: done."
Apr 17 23:37:12.273639 dockerd[2331]: time="2026-04-17T23:37:12.273585095Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 17 23:37:12.273975 dockerd[2331]: time="2026-04-17T23:37:12.273742678Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 17 23:37:12.273975 dockerd[2331]: time="2026-04-17T23:37:12.273889616Z" level=info msg="Daemon has completed initialization"
Apr 17 23:37:12.323324 dockerd[2331]: time="2026-04-17T23:37:12.323259849Z" level=info msg="API listen on /run/docker.sock"
Apr 17 23:37:12.323869 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 17 23:37:12.905055 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3140765905-merged.mount: Deactivated successfully.
Apr 17 23:37:13.356763 containerd[1982]: time="2026-04-17T23:37:13.356716725Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\"" Apr 17 23:37:13.922361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3795259543.mount: Deactivated successfully. Apr 17 23:37:16.083805 containerd[1982]: time="2026-04-17T23:37:16.083753348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:16.085778 containerd[1982]: time="2026-04-17T23:37:16.085719698Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193989" Apr 17 23:37:16.088233 containerd[1982]: time="2026-04-17T23:37:16.088155800Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:16.092722 containerd[1982]: time="2026-04-17T23:37:16.092582247Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:16.093981 containerd[1982]: time="2026-04-17T23:37:16.093943335Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.73718766s" Apr 17 23:37:16.094257 containerd[1982]: time="2026-04-17T23:37:16.094099872Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 17 23:37:16.095230 containerd[1982]: time="2026-04-17T23:37:16.095196420Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\"" Apr 17 23:37:17.964352 containerd[1982]: time="2026-04-17T23:37:17.964294552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:17.965951 containerd[1982]: time="2026-04-17T23:37:17.965901733Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171447" Apr 17 23:37:17.967610 containerd[1982]: time="2026-04-17T23:37:17.967327406Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:17.971268 containerd[1982]: time="2026-04-17T23:37:17.971191449Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:17.972817 containerd[1982]: time="2026-04-17T23:37:17.972474826Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.877243232s" Apr 17 23:37:17.972817 containerd[1982]: time="2026-04-17T23:37:17.972537157Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\"" Apr 17 23:37:17.973507 containerd[1982]: time="2026-04-17T23:37:17.973480551Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 17 23:37:18.350869 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 17 23:37:18.355970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:37:18.559134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:37:18.564897 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:37:18.607738 kubelet[2541]: E0417 23:37:18.607262 2541 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:37:18.611328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:37:18.611528 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 17 23:37:19.667826 containerd[1982]: time="2026-04-17T23:37:19.667771554Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:19.670138 containerd[1982]: time="2026-04-17T23:37:19.670071718Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289756" Apr 17 23:37:19.672829 containerd[1982]: time="2026-04-17T23:37:19.672768137Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:19.677135 containerd[1982]: time="2026-04-17T23:37:19.677071493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:19.678377 containerd[1982]: time="2026-04-17T23:37:19.678340397Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.704818818s" Apr 17 23:37:19.678683 containerd[1982]: time="2026-04-17T23:37:19.678498403Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\"" Apr 17 23:37:19.679419 containerd[1982]: time="2026-04-17T23:37:19.679280247Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\"" Apr 17 23:37:20.799307 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1807726614.mount: Deactivated successfully. 
Apr 17 23:37:21.384956 containerd[1982]: time="2026-04-17T23:37:21.384896501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:21.387001 containerd[1982]: time="2026-04-17T23:37:21.386829751Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010711" Apr 17 23:37:21.389362 containerd[1982]: time="2026-04-17T23:37:21.389294184Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:21.392818 containerd[1982]: time="2026-04-17T23:37:21.392759880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:21.393655 containerd[1982]: time="2026-04-17T23:37:21.393486992Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.714169648s" Apr 17 23:37:21.393655 containerd[1982]: time="2026-04-17T23:37:21.393535433Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\"" Apr 17 23:37:21.394336 containerd[1982]: time="2026-04-17T23:37:21.394310154Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 17 23:37:21.954893 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2310128436.mount: Deactivated successfully. 
Apr 17 23:37:23.372749 containerd[1982]: time="2026-04-17T23:37:23.372673789Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:23.374604 containerd[1982]: time="2026-04-17T23:37:23.374531167Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Apr 17 23:37:23.376984 containerd[1982]: time="2026-04-17T23:37:23.376528241Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:23.380766 containerd[1982]: time="2026-04-17T23:37:23.380704370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:23.382249 containerd[1982]: time="2026-04-17T23:37:23.382088497Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.987744389s" Apr 17 23:37:23.382249 containerd[1982]: time="2026-04-17T23:37:23.382131050Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 17 23:37:23.383232 containerd[1982]: time="2026-04-17T23:37:23.383209099Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 17 23:37:23.703198 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 17 23:37:23.797278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2322460023.mount: Deactivated successfully. Apr 17 23:37:23.802045 containerd[1982]: time="2026-04-17T23:37:23.801999507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:23.803014 containerd[1982]: time="2026-04-17T23:37:23.802957257Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Apr 17 23:37:23.804341 containerd[1982]: time="2026-04-17T23:37:23.804276647Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:23.806836 containerd[1982]: time="2026-04-17T23:37:23.806775211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:23.808443 containerd[1982]: time="2026-04-17T23:37:23.807651981Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 424.407247ms" Apr 17 23:37:23.808443 containerd[1982]: time="2026-04-17T23:37:23.807708998Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 17 23:37:23.808443 containerd[1982]: time="2026-04-17T23:37:23.808326134Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 17 23:37:24.271905 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1178402606.mount: Deactivated successfully. 
Apr 17 23:37:25.460272 containerd[1982]: time="2026-04-17T23:37:25.460209635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:25.462208 containerd[1982]: time="2026-04-17T23:37:25.462135628Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719426" Apr 17 23:37:25.464493 containerd[1982]: time="2026-04-17T23:37:25.464426436Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:25.468895 containerd[1982]: time="2026-04-17T23:37:25.468850106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:25.470971 containerd[1982]: time="2026-04-17T23:37:25.470397011Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.662040913s" Apr 17 23:37:25.470971 containerd[1982]: time="2026-04-17T23:37:25.470443386Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 17 23:37:28.850915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 17 23:37:28.861939 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:37:29.195903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:37:29.206377 (kubelet)[2711]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 17 23:37:29.261605 kubelet[2711]: E0417 23:37:29.261551 2711 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 17 23:37:29.264994 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 17 23:37:29.265367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 17 23:37:29.405077 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:37:29.411061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:37:29.455352 systemd[1]: Reloading requested from client PID 2726 ('systemctl') (unit session-7.scope)... Apr 17 23:37:29.456000 systemd[1]: Reloading... Apr 17 23:37:29.581752 zram_generator::config[2769]: No configuration found. Apr 17 23:37:29.730805 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 17 23:37:29.816710 systemd[1]: Reloading finished in 360 ms. Apr 17 23:37:29.877873 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 17 23:37:29.877989 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 17 23:37:29.878384 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:37:29.880327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:37:30.094262 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 17 23:37:30.101132 (kubelet)[2829]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:37:30.156477 kubelet[2829]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:37:30.156477 kubelet[2829]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:37:30.156477 kubelet[2829]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:37:30.159950 kubelet[2829]: I0417 23:37:30.156538 2829 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:37:30.989204 kubelet[2829]: I0417 23:37:30.988650 2829 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:37:30.989204 kubelet[2829]: I0417 23:37:30.988835 2829 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:37:30.989204 kubelet[2829]: I0417 23:37:30.989169 2829 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:37:31.034877 kubelet[2829]: I0417 23:37:31.034834 2829 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 17 23:37:31.038775 kubelet[2829]: E0417 23:37:31.038737 2829 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:37:31.045981 kubelet[2829]: E0417 23:37:31.045941 2829 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:37:31.045981 kubelet[2829]: I0417 23:37:31.045973 2829 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:37:31.054724 kubelet[2829]: I0417 23:37:31.054682 2829 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 17 23:37:31.059314 kubelet[2829]: I0417 23:37:31.059260 2829 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 17 23:37:31.065205 kubelet[2829]: I0417 23:37:31.059304 2829 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-109","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:37:31.065205 kubelet[2829]: I0417 23:37:31.065205 2829 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 23:37:31.065438 kubelet[2829]: I0417 23:37:31.065224 2829 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:37:31.065438 kubelet[2829]: I0417 23:37:31.065393 2829 state_mem.go:36] "Initialized new in-memory state store"
Apr 17 23:37:31.072928 kubelet[2829]: I0417 23:37:31.072712 2829 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:37:31.072928 kubelet[2829]: I0417 23:37:31.072820 2829 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:37:31.072928 kubelet[2829]: I0417 23:37:31.072869 2829 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:37:31.075146 kubelet[2829]: I0417 23:37:31.075111 2829 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:37:31.078437 kubelet[2829]: E0417 23:37:31.078392 2829 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-109&limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:37:31.080125 kubelet[2829]: E0417 23:37:31.079995 2829 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:37:31.083793 kubelet[2829]: I0417 23:37:31.083727 2829 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:37:31.084399 kubelet[2829]: I0417 23:37:31.084366 2829 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:37:31.086294 kubelet[2829]: W0417 23:37:31.086261 2829 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 17 23:37:31.095921 kubelet[2829]: I0417 23:37:31.095888 2829 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:37:31.096047 kubelet[2829]: I0417 23:37:31.095950 2829 server.go:1289] "Started kubelet" Apr 17 23:37:31.099459 kubelet[2829]: I0417 23:37:31.099421 2829 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:37:31.107725 kubelet[2829]: I0417 23:37:31.106639 2829 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:37:31.108791 kubelet[2829]: I0417 23:37:31.108767 2829 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:37:31.109059 kubelet[2829]: E0417 23:37:31.109030 2829 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-16-109\" not found" Apr 17 23:37:31.109846 kubelet[2829]: I0417 23:37:31.109827 2829 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:37:31.109918 kubelet[2829]: I0417 23:37:31.109887 2829 reconciler.go:26] "Reconciler: start to sync state"
Apr 17 23:37:31.112654 kubelet[2829]: E0417 23:37:31.110281 2829 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.109:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.109:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-109.18a74930990c2398 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-109,UID:ip-172-31-16-109,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-109,},FirstTimestamp:2026-04-17 23:37:31.095917464 +0000 UTC m=+0.989384406,LastTimestamp:2026-04-17 23:37:31.095917464 +0000 UTC m=+0.989384406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-109,}" Apr 17 23:37:31.113254 kubelet[2829]: I0417 23:37:31.113233 2829 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:37:31.113340 kubelet[2829]: I0417 23:37:31.113318 2829 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:37:31.113615 kubelet[2829]: E0417 23:37:31.113591 2829 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:37:31.113852 kubelet[2829]: I0417 23:37:31.113826 2829 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:37:31.115804 kubelet[2829]: I0417 23:37:31.115732 2829 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:37:31.116012 kubelet[2829]: I0417 23:37:31.115998 2829 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:37:31.119367 kubelet[2829]: E0417 23:37:31.113678 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": dial tcp 172.31.16.109:6443: connect: connection refused" interval="200ms" Apr 17 23:37:31.119367 kubelet[2829]: I0417 23:37:31.117398 2829 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:37:31.122527 kubelet[2829]: I0417 23:37:31.122470 2829 factory.go:223] Registration of the containerd container factory successfully
Apr 17 23:37:31.144299 kubelet[2829]: I0417 23:37:31.144250 2829 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:37:31.148101 kubelet[2829]: I0417 23:37:31.147801 2829 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Apr 17 23:37:31.148101 kubelet[2829]: I0417 23:37:31.147827 2829 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:37:31.148101 kubelet[2829]: I0417 23:37:31.147848 2829 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 17 23:37:31.148101 kubelet[2829]: I0417 23:37:31.147859 2829 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:37:31.148101 kubelet[2829]: E0417 23:37:31.147902 2829 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:37:31.151217 kubelet[2829]: I0417 23:37:31.151192 2829 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:37:31.151217 kubelet[2829]: I0417 23:37:31.151214 2829 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:37:31.151378 kubelet[2829]: I0417 23:37:31.151232 2829 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:37:31.152631 kubelet[2829]: E0417 23:37:31.152599 2829 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:37:31.155725 kubelet[2829]: I0417 23:37:31.155702 2829 policy_none.go:49] "None policy: Start" Apr 17 23:37:31.155804 kubelet[2829]: I0417 23:37:31.155735 2829 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 17 23:37:31.155804 kubelet[2829]: I0417 23:37:31.155749 2829 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:37:31.164049 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 17 23:37:31.176607 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 17 23:37:31.180587 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 17 23:37:31.191681 kubelet[2829]: E0417 23:37:31.191659 2829 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:37:31.193996 kubelet[2829]: I0417 23:37:31.192665 2829 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:37:31.193996 kubelet[2829]: I0417 23:37:31.192680 2829 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:37:31.193996 kubelet[2829]: I0417 23:37:31.193019 2829 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:37:31.195391 kubelet[2829]: E0417 23:37:31.195354 2829 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:37:31.195559 kubelet[2829]: E0417 23:37:31.195539 2829 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-109\" not found" Apr 17 23:37:31.262061 systemd[1]: Created slice kubepods-burstable-poda648033ad4ca395f8bc8fe85e7f5a5c9.slice - libcontainer container kubepods-burstable-poda648033ad4ca395f8bc8fe85e7f5a5c9.slice.
Apr 17 23:37:31.274933 kubelet[2829]: E0417 23:37:31.274664 2829 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Apr 17 23:37:31.279002 systemd[1]: Created slice kubepods-burstable-podc32b952f79e4a664dd1d2862c20c8d4d.slice - libcontainer container kubepods-burstable-podc32b952f79e4a664dd1d2862c20c8d4d.slice. Apr 17 23:37:31.291407 kubelet[2829]: E0417 23:37:31.291355 2829 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Apr 17 23:37:31.295361 kubelet[2829]: I0417 23:37:31.295303 2829 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Apr 17 23:37:31.296292 kubelet[2829]: E0417 23:37:31.296260 2829 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.109:6443/api/v1/nodes\": dial tcp 172.31.16.109:6443: connect: connection refused" node="ip-172-31-16-109" Apr 17 23:37:31.296377 systemd[1]: Created slice kubepods-burstable-podf4941e5e4ff7b2332301ce3985157b03.slice - libcontainer container kubepods-burstable-podf4941e5e4ff7b2332301ce3985157b03.slice. 
Apr 17 23:37:31.298140 kubelet[2829]: E0417 23:37:31.298114 2829 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Apr 17 23:37:31.311400 kubelet[2829]: I0417 23:37:31.311354 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4941e5e4ff7b2332301ce3985157b03-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-109\" (UID: \"f4941e5e4ff7b2332301ce3985157b03\") " pod="kube-system/kube-scheduler-ip-172-31-16-109" Apr 17 23:37:31.311400 kubelet[2829]: I0417 23:37:31.311399 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a648033ad4ca395f8bc8fe85e7f5a5c9-ca-certs\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"a648033ad4ca395f8bc8fe85e7f5a5c9\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:31.311675 kubelet[2829]: I0417 23:37:31.311424 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a648033ad4ca395f8bc8fe85e7f5a5c9-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"a648033ad4ca395f8bc8fe85e7f5a5c9\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:31.311675 kubelet[2829]: I0417 23:37:31.311445 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c32b952f79e4a664dd1d2862c20c8d4d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"c32b952f79e4a664dd1d2862c20c8d4d\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:31.311675 kubelet[2829]: I0417 23:37:31.311472 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c32b952f79e4a664dd1d2862c20c8d4d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"c32b952f79e4a664dd1d2862c20c8d4d\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:31.311675 kubelet[2829]: I0417 23:37:31.311494 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c32b952f79e4a664dd1d2862c20c8d4d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"c32b952f79e4a664dd1d2862c20c8d4d\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:31.311675 kubelet[2829]: I0417 23:37:31.311519 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c32b952f79e4a664dd1d2862c20c8d4d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"c32b952f79e4a664dd1d2862c20c8d4d\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:31.311870 kubelet[2829]: I0417 23:37:31.311544 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a648033ad4ca395f8bc8fe85e7f5a5c9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"a648033ad4ca395f8bc8fe85e7f5a5c9\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:31.311870 kubelet[2829]: I0417 23:37:31.311571 2829 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c32b952f79e4a664dd1d2862c20c8d4d-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"c32b952f79e4a664dd1d2862c20c8d4d\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:31.316860 kubelet[2829]: E0417 
23:37:31.316819 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": dial tcp 172.31.16.109:6443: connect: connection refused" interval="400ms" Apr 17 23:37:31.497979 kubelet[2829]: I0417 23:37:31.497943 2829 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Apr 17 23:37:31.498368 kubelet[2829]: E0417 23:37:31.498331 2829 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.109:6443/api/v1/nodes\": dial tcp 172.31.16.109:6443: connect: connection refused" node="ip-172-31-16-109" Apr 17 23:37:31.576158 containerd[1982]: time="2026-04-17T23:37:31.576037685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-109,Uid:a648033ad4ca395f8bc8fe85e7f5a5c9,Namespace:kube-system,Attempt:0,}" Apr 17 23:37:31.600539 containerd[1982]: time="2026-04-17T23:37:31.600495131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-109,Uid:f4941e5e4ff7b2332301ce3985157b03,Namespace:kube-system,Attempt:0,}" Apr 17 23:37:31.601005 containerd[1982]: time="2026-04-17T23:37:31.600495505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-109,Uid:c32b952f79e4a664dd1d2862c20c8d4d,Namespace:kube-system,Attempt:0,}" Apr 17 23:37:31.717678 kubelet[2829]: E0417 23:37:31.717633 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": dial tcp 172.31.16.109:6443: connect: connection refused" interval="800ms" Apr 17 23:37:31.900627 kubelet[2829]: I0417 23:37:31.900593 2829 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Apr 17 23:37:31.901249 kubelet[2829]: E0417 23:37:31.901001 2829 
kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.109:6443/api/v1/nodes\": dial tcp 172.31.16.109:6443: connect: connection refused" node="ip-172-31-16-109" Apr 17 23:37:31.960389 kubelet[2829]: E0417 23:37:31.960344 2829 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.16.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 17 23:37:32.090179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2711373085.mount: Deactivated successfully. Apr 17 23:37:32.107850 containerd[1982]: time="2026-04-17T23:37:32.107792246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:37:32.110086 containerd[1982]: time="2026-04-17T23:37:32.110025341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 17 23:37:32.111982 containerd[1982]: time="2026-04-17T23:37:32.111929091Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:37:32.113923 containerd[1982]: time="2026-04-17T23:37:32.113883436Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:37:32.115947 containerd[1982]: time="2026-04-17T23:37:32.115902476Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:37:32.118186 containerd[1982]: time="2026-04-17T23:37:32.118144013Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:37:32.119687 containerd[1982]: time="2026-04-17T23:37:32.119631858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 17 23:37:32.123507 containerd[1982]: time="2026-04-17T23:37:32.123444374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 17 23:37:32.124489 containerd[1982]: time="2026-04-17T23:37:32.124250278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 523.373479ms" Apr 17 23:37:32.125545 containerd[1982]: time="2026-04-17T23:37:32.125508995Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 549.386267ms" Apr 17 23:37:32.131364 containerd[1982]: time="2026-04-17T23:37:32.131318268Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.52186ms" Apr 17 23:37:32.428064 containerd[1982]: 
time="2026-04-17T23:37:32.419790362Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:32.428064 containerd[1982]: time="2026-04-17T23:37:32.419859795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:32.428064 containerd[1982]: time="2026-04-17T23:37:32.419887324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:32.428064 containerd[1982]: time="2026-04-17T23:37:32.420013434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:32.430642 containerd[1982]: time="2026-04-17T23:37:32.430281793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:32.430642 containerd[1982]: time="2026-04-17T23:37:32.430360619Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:32.430642 containerd[1982]: time="2026-04-17T23:37:32.430392768Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:32.430642 containerd[1982]: time="2026-04-17T23:37:32.430522035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:32.431708 containerd[1982]: time="2026-04-17T23:37:32.431384576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:32.431708 containerd[1982]: time="2026-04-17T23:37:32.431445410Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:32.431708 containerd[1982]: time="2026-04-17T23:37:32.431486997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:32.432663 containerd[1982]: time="2026-04-17T23:37:32.431637623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:32.466909 systemd[1]: Started cri-containerd-7210b6e6e3fa14de2fd7649a8af04ee3725082ebb300d19b315d127011f9487a.scope - libcontainer container 7210b6e6e3fa14de2fd7649a8af04ee3725082ebb300d19b315d127011f9487a. Apr 17 23:37:32.475309 systemd[1]: Started cri-containerd-0c93f5b772d442382b948af32027c33a193fd486c9615d72f5b3eb6c1bcf8e6d.scope - libcontainer container 0c93f5b772d442382b948af32027c33a193fd486c9615d72f5b3eb6c1bcf8e6d. Apr 17 23:37:32.492919 systemd[1]: Started cri-containerd-8b58a9be135e2ed44d20453d35f546f3529c1f662c197204a008cd6de5c9bf2f.scope - libcontainer container 8b58a9be135e2ed44d20453d35f546f3529c1f662c197204a008cd6de5c9bf2f. 
Apr 17 23:37:32.519987 kubelet[2829]: E0417 23:37:32.519944 2829 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": dial tcp 172.31.16.109:6443: connect: connection refused" interval="1.6s" Apr 17 23:37:32.556729 kubelet[2829]: E0417 23:37:32.556193 2829 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.16.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 17 23:37:32.567034 containerd[1982]: time="2026-04-17T23:37:32.566993839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-109,Uid:f4941e5e4ff7b2332301ce3985157b03,Namespace:kube-system,Attempt:0,} returns sandbox id \"7210b6e6e3fa14de2fd7649a8af04ee3725082ebb300d19b315d127011f9487a\"" Apr 17 23:37:32.587619 containerd[1982]: time="2026-04-17T23:37:32.587577624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-109,Uid:a648033ad4ca395f8bc8fe85e7f5a5c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b58a9be135e2ed44d20453d35f546f3529c1f662c197204a008cd6de5c9bf2f\"" Apr 17 23:37:32.589240 kubelet[2829]: E0417 23:37:32.588721 2829 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.16.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 17 23:37:32.592252 containerd[1982]: time="2026-04-17T23:37:32.592222508Z" level=info msg="CreateContainer within sandbox \"7210b6e6e3fa14de2fd7649a8af04ee3725082ebb300d19b315d127011f9487a\" for 
container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 17 23:37:32.597422 containerd[1982]: time="2026-04-17T23:37:32.597384721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-109,Uid:c32b952f79e4a664dd1d2862c20c8d4d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c93f5b772d442382b948af32027c33a193fd486c9615d72f5b3eb6c1bcf8e6d\"" Apr 17 23:37:32.599779 containerd[1982]: time="2026-04-17T23:37:32.599574366Z" level=info msg="CreateContainer within sandbox \"8b58a9be135e2ed44d20453d35f546f3529c1f662c197204a008cd6de5c9bf2f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 17 23:37:32.604871 containerd[1982]: time="2026-04-17T23:37:32.604675358Z" level=info msg="CreateContainer within sandbox \"0c93f5b772d442382b948af32027c33a193fd486c9615d72f5b3eb6c1bcf8e6d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 17 23:37:32.646760 containerd[1982]: time="2026-04-17T23:37:32.646660180Z" level=info msg="CreateContainer within sandbox \"7210b6e6e3fa14de2fd7649a8af04ee3725082ebb300d19b315d127011f9487a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3\"" Apr 17 23:37:32.647411 containerd[1982]: time="2026-04-17T23:37:32.647374294Z" level=info msg="StartContainer for \"33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3\"" Apr 17 23:37:32.649606 containerd[1982]: time="2026-04-17T23:37:32.649434528Z" level=info msg="CreateContainer within sandbox \"8b58a9be135e2ed44d20453d35f546f3529c1f662c197204a008cd6de5c9bf2f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"31abc6135c0776901cf282da5304743b5ac0ead8ed3f209c1df2819a0771c0c7\"" Apr 17 23:37:32.655182 containerd[1982]: time="2026-04-17T23:37:32.654957430Z" level=info msg="CreateContainer within sandbox \"0c93f5b772d442382b948af32027c33a193fd486c9615d72f5b3eb6c1bcf8e6d\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62\"" Apr 17 23:37:32.655867 containerd[1982]: time="2026-04-17T23:37:32.655829096Z" level=info msg="StartContainer for \"d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62\"" Apr 17 23:37:32.659721 containerd[1982]: time="2026-04-17T23:37:32.659663553Z" level=info msg="StartContainer for \"31abc6135c0776901cf282da5304743b5ac0ead8ed3f209c1df2819a0771c0c7\"" Apr 17 23:37:32.688061 kubelet[2829]: E0417 23:37:32.687940 2829 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.16.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-109&limit=500&resourceVersion=0\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 17 23:37:32.692913 systemd[1]: Started cri-containerd-33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3.scope - libcontainer container 33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3. Apr 17 23:37:32.705985 kubelet[2829]: I0417 23:37:32.705952 2829 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Apr 17 23:37:32.708255 kubelet[2829]: E0417 23:37:32.708207 2829 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.16.109:6443/api/v1/nodes\": dial tcp 172.31.16.109:6443: connect: connection refused" node="ip-172-31-16-109" Apr 17 23:37:32.716936 systemd[1]: Started cri-containerd-31abc6135c0776901cf282da5304743b5ac0ead8ed3f209c1df2819a0771c0c7.scope - libcontainer container 31abc6135c0776901cf282da5304743b5ac0ead8ed3f209c1df2819a0771c0c7. 
Apr 17 23:37:32.722089 systemd[1]: Started cri-containerd-d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62.scope - libcontainer container d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62. Apr 17 23:37:32.791682 containerd[1982]: time="2026-04-17T23:37:32.791633959Z" level=info msg="StartContainer for \"31abc6135c0776901cf282da5304743b5ac0ead8ed3f209c1df2819a0771c0c7\" returns successfully" Apr 17 23:37:32.806158 containerd[1982]: time="2026-04-17T23:37:32.806085282Z" level=info msg="StartContainer for \"33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3\" returns successfully" Apr 17 23:37:32.834706 containerd[1982]: time="2026-04-17T23:37:32.834189163Z" level=info msg="StartContainer for \"d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62\" returns successfully" Apr 17 23:37:33.161383 kubelet[2829]: E0417 23:37:33.161349 2829 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Apr 17 23:37:33.162733 kubelet[2829]: E0417 23:37:33.161641 2829 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.16.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.16.109:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 17 23:37:33.168399 kubelet[2829]: E0417 23:37:33.168366 2829 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Apr 17 23:37:33.169996 kubelet[2829]: E0417 23:37:33.169966 2829 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Apr 17 23:37:34.171048 kubelet[2829]: E0417 
23:37:34.171009 2829 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Apr 17 23:37:34.171491 kubelet[2829]: E0417 23:37:34.171453 2829 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Apr 17 23:37:34.310728 kubelet[2829]: I0417 23:37:34.310342 2829 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Apr 17 23:37:34.835527 kubelet[2829]: E0417 23:37:34.835489 2829 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-109\" not found" node="ip-172-31-16-109" Apr 17 23:37:34.917334 kubelet[2829]: E0417 23:37:34.917229 2829 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-109.18a74930990c2398 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-109,UID:ip-172-31-16-109,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-109,},FirstTimestamp:2026-04-17 23:37:31.095917464 +0000 UTC m=+0.989384406,LastTimestamp:2026-04-17 23:37:31.095917464 +0000 UTC m=+0.989384406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-109,}" Apr 17 23:37:34.989427 kubelet[2829]: E0417 23:37:34.989312 2829 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-16-109.18a749309c2160ef default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-109,UID:ip-172-31-16-109,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ip-172-31-16-109 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ip-172-31-16-109,},FirstTimestamp:2026-04-17 23:37:31.147641071 +0000 UTC m=+1.041108017,LastTimestamp:2026-04-17 23:37:31.147641071 +0000 UTC m=+1.041108017,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-109,}" Apr 17 23:37:35.022464 kubelet[2829]: I0417 23:37:35.021781 2829 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-109" Apr 17 23:37:35.022464 kubelet[2829]: E0417 23:37:35.021824 2829 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-16-109\": node \"ip-172-31-16-109\" not found" Apr 17 23:37:35.084667 kubelet[2829]: I0417 23:37:35.084283 2829 apiserver.go:52] "Watching apiserver" Apr 17 23:37:35.110716 kubelet[2829]: I0417 23:37:35.110450 2829 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:37:35.110716 kubelet[2829]: I0417 23:37:35.110505 2829 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:35.123718 kubelet[2829]: E0417 23:37:35.122915 2829 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-16-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:35.123718 kubelet[2829]: I0417 23:37:35.122954 2829 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-109" Apr 17 23:37:35.126997 kubelet[2829]: E0417 23:37:35.126843 2829 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-ip-172-31-16-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-109" Apr 17 23:37:35.126997 kubelet[2829]: I0417 23:37:35.126876 2829 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:35.132725 kubelet[2829]: E0417 23:37:35.131793 2829 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:35.177221 kubelet[2829]: I0417 23:37:35.177186 2829 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-109" Apr 17 23:37:35.179545 kubelet[2829]: I0417 23:37:35.179342 2829 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:35.188969 kubelet[2829]: E0417 23:37:35.188931 2829 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-16-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-16-109" Apr 17 23:37:35.189400 kubelet[2829]: E0417 23:37:35.189372 2829 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-109\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:37.099589 systemd[1]: Reloading requested from client PID 3113 ('systemctl') (unit session-7.scope)... Apr 17 23:37:37.099609 systemd[1]: Reloading... Apr 17 23:37:37.230727 zram_generator::config[3156]: No configuration found. Apr 17 23:37:37.347001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Apr 17 23:37:37.457320 systemd[1]: Reloading finished in 357 ms. Apr 17 23:37:37.502126 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:37:37.521767 systemd[1]: kubelet.service: Deactivated successfully. Apr 17 23:37:37.522026 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:37:37.522090 systemd[1]: kubelet.service: Consumed 1.393s CPU time, 131.2M memory peak, 0B memory swap peak. Apr 17 23:37:37.528207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 17 23:37:37.808073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 17 23:37:37.820233 (kubelet)[3213]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 17 23:37:37.895230 kubelet[3213]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 17 23:37:37.895230 kubelet[3213]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 17 23:37:37.895230 kubelet[3213]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 17 23:37:37.895729 kubelet[3213]: I0417 23:37:37.895292 3213 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 17 23:37:37.907149 kubelet[3213]: I0417 23:37:37.907119 3213 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 17 23:37:37.908598 kubelet[3213]: I0417 23:37:37.907281 3213 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 17 23:37:37.908598 kubelet[3213]: I0417 23:37:37.907482 3213 server.go:956] "Client rotation is on, will bootstrap in background" Apr 17 23:37:37.908598 kubelet[3213]: I0417 23:37:37.908589 3213 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 17 23:37:37.911892 kubelet[3213]: I0417 23:37:37.911721 3213 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 17 23:37:37.915515 kubelet[3213]: E0417 23:37:37.915490 3213 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 17 23:37:37.915607 kubelet[3213]: I0417 23:37:37.915599 3213 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 17 23:37:37.918873 kubelet[3213]: I0417 23:37:37.918849 3213 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 17 23:37:37.919708 kubelet[3213]: I0417 23:37:37.919103 3213 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 17 23:37:37.919708 kubelet[3213]: I0417 23:37:37.919151 3213 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-109","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 17 23:37:37.919708 kubelet[3213]: I0417 23:37:37.919563 3213 topology_manager.go:138] "Creating topology manager with none policy" Apr 17 
23:37:37.919708 kubelet[3213]: I0417 23:37:37.919578 3213 container_manager_linux.go:303] "Creating device plugin manager" Apr 17 23:37:37.919708 kubelet[3213]: I0417 23:37:37.919640 3213 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:37:37.920052 kubelet[3213]: I0417 23:37:37.919877 3213 kubelet.go:480] "Attempting to sync node with API server" Apr 17 23:37:37.920052 kubelet[3213]: I0417 23:37:37.919898 3213 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 17 23:37:37.920052 kubelet[3213]: I0417 23:37:37.919933 3213 kubelet.go:386] "Adding apiserver pod source" Apr 17 23:37:37.920052 kubelet[3213]: I0417 23:37:37.919953 3213 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 17 23:37:37.925897 kubelet[3213]: I0417 23:37:37.925869 3213 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 17 23:37:37.926738 kubelet[3213]: I0417 23:37:37.926721 3213 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 17 23:37:37.932850 kubelet[3213]: I0417 23:37:37.932833 3213 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 17 23:37:37.934709 kubelet[3213]: I0417 23:37:37.932992 3213 server.go:1289] "Started kubelet" Apr 17 23:37:37.937334 kubelet[3213]: I0417 23:37:37.937280 3213 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 17 23:37:37.937605 kubelet[3213]: I0417 23:37:37.937581 3213 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 17 23:37:37.941722 kubelet[3213]: I0417 23:37:37.941074 3213 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 17 23:37:37.959894 kubelet[3213]: I0417 23:37:37.958139 3213 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 17 23:37:37.960084 kubelet[3213]: I0417 23:37:37.960058 3213 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 17 23:37:37.963814 kubelet[3213]: I0417 23:37:37.961368 3213 server.go:317] "Adding debug handlers to kubelet server" Apr 17 23:37:37.965686 kubelet[3213]: I0417 23:37:37.965294 3213 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 17 23:37:37.968054 kubelet[3213]: I0417 23:37:37.968031 3213 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 17 23:37:37.969842 kubelet[3213]: I0417 23:37:37.969821 3213 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 17 23:37:37.970046 kubelet[3213]: I0417 23:37:37.970034 3213 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 17 23:37:37.970407 kubelet[3213]: I0417 23:37:37.970141 3213 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Apr 17 23:37:37.970407 kubelet[3213]: I0417 23:37:37.970153 3213 kubelet.go:2436] "Starting kubelet main sync loop" Apr 17 23:37:37.970407 kubelet[3213]: E0417 23:37:37.970199 3213 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 17 23:37:37.971034 kubelet[3213]: I0417 23:37:37.971016 3213 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 17 23:37:37.971274 kubelet[3213]: I0417 23:37:37.971261 3213 reconciler.go:26] "Reconciler: start to sync state" Apr 17 23:37:37.975801 kubelet[3213]: I0417 23:37:37.975779 3213 factory.go:223] Registration of the systemd container factory successfully Apr 17 23:37:37.977712 kubelet[3213]: I0417 23:37:37.976019 3213 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 17 23:37:37.977924 kubelet[3213]: E0417 23:37:37.977904 3213 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 17 23:37:37.982205 kubelet[3213]: I0417 23:37:37.982162 3213 factory.go:223] Registration of the containerd container factory successfully Apr 17 23:37:38.038952 kubelet[3213]: I0417 23:37:38.038924 3213 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 17 23:37:38.038952 kubelet[3213]: I0417 23:37:38.038942 3213 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 17 23:37:38.038952 kubelet[3213]: I0417 23:37:38.038963 3213 state_mem.go:36] "Initialized new in-memory state store" Apr 17 23:37:38.039171 kubelet[3213]: I0417 23:37:38.039118 3213 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 17 23:37:38.039171 kubelet[3213]: I0417 23:37:38.039131 3213 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 17 23:37:38.039171 kubelet[3213]: I0417 23:37:38.039155 3213 policy_none.go:49] "None policy: Start" Apr 17 23:37:38.039171 kubelet[3213]: I0417 23:37:38.039167 3213 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 17 23:37:38.039336 kubelet[3213]: I0417 23:37:38.039180 3213 state_mem.go:35] "Initializing new in-memory state store" Apr 17 23:37:38.039336 kubelet[3213]: I0417 23:37:38.039296 3213 state_mem.go:75] "Updated machine memory state" Apr 17 23:37:38.043492 kubelet[3213]: E0417 23:37:38.043460 3213 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 17 23:37:38.043667 kubelet[3213]: I0417 23:37:38.043648 3213 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 17 23:37:38.043761 kubelet[3213]: I0417 23:37:38.043666 3213 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 17 23:37:38.044279 kubelet[3213]: I0417 23:37:38.044008 3213 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 17 23:37:38.046728 kubelet[3213]: E0417 23:37:38.046397 3213 
eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 17 23:37:38.073431 kubelet[3213]: I0417 23:37:38.070984 3213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:38.073754 kubelet[3213]: I0417 23:37:38.071009 3213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-16-109" Apr 17 23:37:38.075617 kubelet[3213]: I0417 23:37:38.071186 3213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:38.075770 kubelet[3213]: I0417 23:37:38.071422 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a648033ad4ca395f8bc8fe85e7f5a5c9-ca-certs\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"a648033ad4ca395f8bc8fe85e7f5a5c9\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:38.075822 kubelet[3213]: I0417 23:37:38.075799 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a648033ad4ca395f8bc8fe85e7f5a5c9-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"a648033ad4ca395f8bc8fe85e7f5a5c9\") " pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:38.075884 kubelet[3213]: I0417 23:37:38.075829 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c32b952f79e4a664dd1d2862c20c8d4d-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"c32b952f79e4a664dd1d2862c20c8d4d\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:38.075884 kubelet[3213]: I0417 23:37:38.075859 3213 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c32b952f79e4a664dd1d2862c20c8d4d-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"c32b952f79e4a664dd1d2862c20c8d4d\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:38.075969 kubelet[3213]: I0417 23:37:38.075885 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c32b952f79e4a664dd1d2862c20c8d4d-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"c32b952f79e4a664dd1d2862c20c8d4d\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:38.075969 kubelet[3213]: I0417 23:37:38.075912 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c32b952f79e4a664dd1d2862c20c8d4d-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"c32b952f79e4a664dd1d2862c20c8d4d\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:38.075969 kubelet[3213]: I0417 23:37:38.075938 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f4941e5e4ff7b2332301ce3985157b03-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-109\" (UID: \"f4941e5e4ff7b2332301ce3985157b03\") " pod="kube-system/kube-scheduler-ip-172-31-16-109" Apr 17 23:37:38.075969 kubelet[3213]: I0417 23:37:38.075961 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a648033ad4ca395f8bc8fe85e7f5a5c9-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-109\" (UID: \"a648033ad4ca395f8bc8fe85e7f5a5c9\") " 
pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:38.076135 kubelet[3213]: I0417 23:37:38.075996 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c32b952f79e4a664dd1d2862c20c8d4d-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-109\" (UID: \"c32b952f79e4a664dd1d2862c20c8d4d\") " pod="kube-system/kube-controller-manager-ip-172-31-16-109" Apr 17 23:37:38.153584 kubelet[3213]: I0417 23:37:38.153323 3213 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-16-109" Apr 17 23:37:38.164590 kubelet[3213]: I0417 23:37:38.163815 3213 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-16-109" Apr 17 23:37:38.164590 kubelet[3213]: I0417 23:37:38.163903 3213 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-16-109" Apr 17 23:37:38.839550 update_engine[1961]: I20260417 23:37:38.838735 1961 update_attempter.cc:509] Updating boot flags... 
Apr 17 23:37:38.929845 kubelet[3213]: I0417 23:37:38.929738 3213 apiserver.go:52] "Watching apiserver" Apr 17 23:37:38.972493 kubelet[3213]: I0417 23:37:38.972051 3213 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 17 23:37:38.984721 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 32 scanned by (udev-worker) (3270) Apr 17 23:37:39.017727 kubelet[3213]: I0417 23:37:39.016880 3213 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:39.042148 kubelet[3213]: E0417 23:37:39.039562 3213 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-16-109\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-109" Apr 17 23:37:39.210806 kubelet[3213]: I0417 23:37:39.208533 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-109" podStartSLOduration=1.208513569 podStartE2EDuration="1.208513569s" podCreationTimestamp="2026-04-17 23:37:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:37:39.185343729 +0000 UTC m=+1.357656361" watchObservedRunningTime="2026-04-17 23:37:39.208513569 +0000 UTC m=+1.380826201" Apr 17 23:37:39.244391 kubelet[3213]: I0417 23:37:39.244331 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-109" podStartSLOduration=1.244310771 podStartE2EDuration="1.244310771s" podCreationTimestamp="2026-04-17 23:37:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:37:39.237710204 +0000 UTC m=+1.410022827" watchObservedRunningTime="2026-04-17 23:37:39.244310771 +0000 UTC m=+1.416623394" Apr 17 23:37:39.244571 kubelet[3213]: I0417 23:37:39.244457 3213 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-109" podStartSLOduration=1.244450226 podStartE2EDuration="1.244450226s" podCreationTimestamp="2026-04-17 23:37:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:37:39.211276118 +0000 UTC m=+1.383588750" watchObservedRunningTime="2026-04-17 23:37:39.244450226 +0000 UTC m=+1.416762858" Apr 17 23:37:43.567545 kubelet[3213]: I0417 23:37:43.567491 3213 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 17 23:37:43.568320 containerd[1982]: time="2026-04-17T23:37:43.567855148Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 17 23:37:43.568708 kubelet[3213]: I0417 23:37:43.568668 3213 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 17 23:37:44.543965 systemd[1]: Created slice kubepods-besteffort-pod974e7db0_f0bb_414c_bc8c_95bbdc7a5544.slice - libcontainer container kubepods-besteffort-pod974e7db0_f0bb_414c_bc8c_95bbdc7a5544.slice. 
Apr 17 23:37:44.724510 kubelet[3213]: I0417 23:37:44.724413 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/974e7db0-f0bb-414c-bc8c-95bbdc7a5544-kube-proxy\") pod \"kube-proxy-ns4rf\" (UID: \"974e7db0-f0bb-414c-bc8c-95bbdc7a5544\") " pod="kube-system/kube-proxy-ns4rf" Apr 17 23:37:44.724510 kubelet[3213]: I0417 23:37:44.724464 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/974e7db0-f0bb-414c-bc8c-95bbdc7a5544-xtables-lock\") pod \"kube-proxy-ns4rf\" (UID: \"974e7db0-f0bb-414c-bc8c-95bbdc7a5544\") " pod="kube-system/kube-proxy-ns4rf" Apr 17 23:37:44.725100 kubelet[3213]: I0417 23:37:44.724527 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/974e7db0-f0bb-414c-bc8c-95bbdc7a5544-lib-modules\") pod \"kube-proxy-ns4rf\" (UID: \"974e7db0-f0bb-414c-bc8c-95bbdc7a5544\") " pod="kube-system/kube-proxy-ns4rf" Apr 17 23:37:44.725100 kubelet[3213]: I0417 23:37:44.724558 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g67d8\" (UniqueName: \"kubernetes.io/projected/974e7db0-f0bb-414c-bc8c-95bbdc7a5544-kube-api-access-g67d8\") pod \"kube-proxy-ns4rf\" (UID: \"974e7db0-f0bb-414c-bc8c-95bbdc7a5544\") " pod="kube-system/kube-proxy-ns4rf" Apr 17 23:37:44.823323 systemd[1]: Created slice kubepods-besteffort-pod4e84d916_11fd_491d_937c_6f44f7271a94.slice - libcontainer container kubepods-besteffort-pod4e84d916_11fd_491d_937c_6f44f7271a94.slice. 
Apr 17 23:37:44.873729 containerd[1982]: time="2026-04-17T23:37:44.873645636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ns4rf,Uid:974e7db0-f0bb-414c-bc8c-95bbdc7a5544,Namespace:kube-system,Attempt:0,}" Apr 17 23:37:44.919272 containerd[1982]: time="2026-04-17T23:37:44.918808325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:44.919477 containerd[1982]: time="2026-04-17T23:37:44.919118731Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:44.919608 containerd[1982]: time="2026-04-17T23:37:44.919574223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:44.920516 containerd[1982]: time="2026-04-17T23:37:44.919855858Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:44.934194 kubelet[3213]: I0417 23:37:44.934050 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4e84d916-11fd-491d-937c-6f44f7271a94-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-smlt5\" (UID: \"4e84d916-11fd-491d-937c-6f44f7271a94\") " pod="tigera-operator/tigera-operator-6bf85f8dd-smlt5" Apr 17 23:37:44.934558 kubelet[3213]: I0417 23:37:44.934385 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmv2l\" (UniqueName: \"kubernetes.io/projected/4e84d916-11fd-491d-937c-6f44f7271a94-kube-api-access-jmv2l\") pod \"tigera-operator-6bf85f8dd-smlt5\" (UID: \"4e84d916-11fd-491d-937c-6f44f7271a94\") " pod="tigera-operator/tigera-operator-6bf85f8dd-smlt5" Apr 17 23:37:44.945979 systemd[1]: run-containerd-runc-k8s.io-d71fe3b6e1e1f38df418ed3ee556fc7fef838ac8e37470f3d585aa068f494603-runc.YhyKYh.mount: Deactivated successfully. Apr 17 23:37:44.954910 systemd[1]: Started cri-containerd-d71fe3b6e1e1f38df418ed3ee556fc7fef838ac8e37470f3d585aa068f494603.scope - libcontainer container d71fe3b6e1e1f38df418ed3ee556fc7fef838ac8e37470f3d585aa068f494603. 
Apr 17 23:37:44.981858 containerd[1982]: time="2026-04-17T23:37:44.981820219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ns4rf,Uid:974e7db0-f0bb-414c-bc8c-95bbdc7a5544,Namespace:kube-system,Attempt:0,} returns sandbox id \"d71fe3b6e1e1f38df418ed3ee556fc7fef838ac8e37470f3d585aa068f494603\"" Apr 17 23:37:44.993736 containerd[1982]: time="2026-04-17T23:37:44.993646726Z" level=info msg="CreateContainer within sandbox \"d71fe3b6e1e1f38df418ed3ee556fc7fef838ac8e37470f3d585aa068f494603\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 17 23:37:45.022427 containerd[1982]: time="2026-04-17T23:37:45.022381240Z" level=info msg="CreateContainer within sandbox \"d71fe3b6e1e1f38df418ed3ee556fc7fef838ac8e37470f3d585aa068f494603\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f671e108260c12da0c3ba1dceae1f4e2de3e752abb32096f166f5d4908f975c\"" Apr 17 23:37:45.024386 containerd[1982]: time="2026-04-17T23:37:45.023231219Z" level=info msg="StartContainer for \"2f671e108260c12da0c3ba1dceae1f4e2de3e752abb32096f166f5d4908f975c\"" Apr 17 23:37:45.062961 systemd[1]: Started cri-containerd-2f671e108260c12da0c3ba1dceae1f4e2de3e752abb32096f166f5d4908f975c.scope - libcontainer container 2f671e108260c12da0c3ba1dceae1f4e2de3e752abb32096f166f5d4908f975c. Apr 17 23:37:45.096004 containerd[1982]: time="2026-04-17T23:37:45.095634127Z" level=info msg="StartContainer for \"2f671e108260c12da0c3ba1dceae1f4e2de3e752abb32096f166f5d4908f975c\" returns successfully" Apr 17 23:37:45.134043 containerd[1982]: time="2026-04-17T23:37:45.133579687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-smlt5,Uid:4e84d916-11fd-491d-937c-6f44f7271a94,Namespace:tigera-operator,Attempt:0,}" Apr 17 23:37:45.171188 containerd[1982]: time="2026-04-17T23:37:45.170682251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:45.171188 containerd[1982]: time="2026-04-17T23:37:45.170790507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:45.171188 containerd[1982]: time="2026-04-17T23:37:45.170822080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:45.171188 containerd[1982]: time="2026-04-17T23:37:45.170987206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:45.194933 systemd[1]: Started cri-containerd-d73d2f5beea2dc202d72ff135757dd95c86675589d2d2b2d429a8ada2044cc2b.scope - libcontainer container d73d2f5beea2dc202d72ff135757dd95c86675589d2d2b2d429a8ada2044cc2b. Apr 17 23:37:45.271709 containerd[1982]: time="2026-04-17T23:37:45.271349929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-smlt5,Uid:4e84d916-11fd-491d-937c-6f44f7271a94,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d73d2f5beea2dc202d72ff135757dd95c86675589d2d2b2d429a8ada2044cc2b\"" Apr 17 23:37:45.278639 containerd[1982]: time="2026-04-17T23:37:45.278592513Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 17 23:37:46.066030 kubelet[3213]: I0417 23:37:46.063892 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ns4rf" podStartSLOduration=2.061077306 podStartE2EDuration="2.061077306s" podCreationTimestamp="2026-04-17 23:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:37:46.059423096 +0000 UTC m=+8.231735728" watchObservedRunningTime="2026-04-17 23:37:46.061077306 +0000 UTC m=+8.233389939" Apr 17 23:37:46.730870 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3790782835.mount: Deactivated successfully. Apr 17 23:37:48.797713 containerd[1982]: time="2026-04-17T23:37:48.797641790Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:48.800004 containerd[1982]: time="2026-04-17T23:37:48.799950412Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 17 23:37:48.802508 containerd[1982]: time="2026-04-17T23:37:48.802151636Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:48.805717 containerd[1982]: time="2026-04-17T23:37:48.805644548Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:37:48.806512 containerd[1982]: time="2026-04-17T23:37:48.806476494Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.527833401s" Apr 17 23:37:48.806588 containerd[1982]: time="2026-04-17T23:37:48.806519205Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 17 23:37:48.812890 containerd[1982]: time="2026-04-17T23:37:48.812852988Z" level=info msg="CreateContainer within sandbox \"d73d2f5beea2dc202d72ff135757dd95c86675589d2d2b2d429a8ada2044cc2b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 17 23:37:48.860464 containerd[1982]: 
time="2026-04-17T23:37:48.860411016Z" level=info msg="CreateContainer within sandbox \"d73d2f5beea2dc202d72ff135757dd95c86675589d2d2b2d429a8ada2044cc2b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1\"" Apr 17 23:37:48.861370 containerd[1982]: time="2026-04-17T23:37:48.861270783Z" level=info msg="StartContainer for \"5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1\"" Apr 17 23:37:48.901905 systemd[1]: Started cri-containerd-5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1.scope - libcontainer container 5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1. Apr 17 23:37:48.933638 containerd[1982]: time="2026-04-17T23:37:48.933469463Z" level=info msg="StartContainer for \"5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1\" returns successfully" Apr 17 23:37:49.222090 kubelet[3213]: I0417 23:37:49.221986 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-smlt5" podStartSLOduration=1.689053076 podStartE2EDuration="5.221967062s" podCreationTimestamp="2026-04-17 23:37:44 +0000 UTC" firstStartedPulling="2026-04-17 23:37:45.274983267 +0000 UTC m=+7.447295878" lastFinishedPulling="2026-04-17 23:37:48.807897241 +0000 UTC m=+10.980209864" observedRunningTime="2026-04-17 23:37:49.057589358 +0000 UTC m=+11.229901989" watchObservedRunningTime="2026-04-17 23:37:49.221967062 +0000 UTC m=+11.394279694" Apr 17 23:37:55.901882 sudo[2315]: pam_unix(sudo:session): session closed for user root Apr 17 23:37:56.066643 sshd[2312]: pam_unix(sshd:session): session closed for user core Apr 17 23:37:56.070705 systemd-logind[1959]: Session 7 logged out. Waiting for processes to exit. Apr 17 23:37:56.072237 systemd[1]: sshd@6-172.31.16.109:22-20.229.252.112:53334.service: Deactivated successfully. Apr 17 23:37:56.077358 systemd[1]: session-7.scope: Deactivated successfully. 
Apr 17 23:37:56.078119 systemd[1]: session-7.scope: Consumed 6.586s CPU time, 146.5M memory peak, 0B memory swap peak. Apr 17 23:37:56.083816 systemd-logind[1959]: Removed session 7. Apr 17 23:37:57.265871 systemd[1]: Created slice kubepods-besteffort-pod8232a895_6753_4d9f_9394_b5ebfb5a79c4.slice - libcontainer container kubepods-besteffort-pod8232a895_6753_4d9f_9394_b5ebfb5a79c4.slice. Apr 17 23:37:57.326021 kubelet[3213]: I0417 23:37:57.325853 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8232a895-6753-4d9f-9394-b5ebfb5a79c4-tigera-ca-bundle\") pod \"calico-typha-595cc8447d-ln8pp\" (UID: \"8232a895-6753-4d9f-9394-b5ebfb5a79c4\") " pod="calico-system/calico-typha-595cc8447d-ln8pp" Apr 17 23:37:57.326021 kubelet[3213]: I0417 23:37:57.325905 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8232a895-6753-4d9f-9394-b5ebfb5a79c4-typha-certs\") pod \"calico-typha-595cc8447d-ln8pp\" (UID: \"8232a895-6753-4d9f-9394-b5ebfb5a79c4\") " pod="calico-system/calico-typha-595cc8447d-ln8pp" Apr 17 23:37:57.326021 kubelet[3213]: I0417 23:37:57.325935 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drclh\" (UniqueName: \"kubernetes.io/projected/8232a895-6753-4d9f-9394-b5ebfb5a79c4-kube-api-access-drclh\") pod \"calico-typha-595cc8447d-ln8pp\" (UID: \"8232a895-6753-4d9f-9394-b5ebfb5a79c4\") " pod="calico-system/calico-typha-595cc8447d-ln8pp" Apr 17 23:37:57.397292 systemd[1]: Created slice kubepods-besteffort-podeb7cdf61_8cf7_493c_89d5_2be3916f7a7e.slice - libcontainer container kubepods-besteffort-podeb7cdf61_8cf7_493c_89d5_2be3916f7a7e.slice. 
Apr 17 23:37:57.429714 kubelet[3213]: I0417 23:37:57.427174 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-cni-bin-dir\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.429714 kubelet[3213]: I0417 23:37:57.427225 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-nodeproc\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.429714 kubelet[3213]: I0417 23:37:57.427247 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-flexvol-driver-host\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.429714 kubelet[3213]: I0417 23:37:57.427272 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-tigera-ca-bundle\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.429714 kubelet[3213]: I0417 23:37:57.427295 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-xtables-lock\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.430040 kubelet[3213]: I0417 23:37:57.427318 3213 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-bpffs\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.430040 kubelet[3213]: I0417 23:37:57.427339 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-cni-net-dir\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.430040 kubelet[3213]: I0417 23:37:57.427361 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-lib-modules\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.430040 kubelet[3213]: I0417 23:37:57.427384 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-policysync\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.430040 kubelet[3213]: I0417 23:37:57.427406 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-node-certs\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.430040 kubelet[3213]: I0417 23:37:57.427429 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" 
(UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-sys-fs\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.430283 kubelet[3213]: I0417 23:37:57.427450 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-var-lib-calico\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.430283 kubelet[3213]: I0417 23:37:57.427475 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whsvj\" (UniqueName: \"kubernetes.io/projected/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-kube-api-access-whsvj\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.430283 kubelet[3213]: I0417 23:37:57.427512 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-cni-log-dir\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.430283 kubelet[3213]: I0417 23:37:57.427539 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eb7cdf61-8cf7-493c-89d5-2be3916f7a7e-var-run-calico\") pod \"calico-node-tvjrn\" (UID: \"eb7cdf61-8cf7-493c-89d5-2be3916f7a7e\") " pod="calico-system/calico-node-tvjrn" Apr 17 23:37:57.562721 kubelet[3213]: E0417 23:37:57.562081 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.562721 kubelet[3213]: W0417 23:37:57.562106 3213 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.562721 kubelet[3213]: E0417 23:37:57.562129 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:37:57.562721 kubelet[3213]: E0417 23:37:57.562292 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:37:57.588933 containerd[1982]: time="2026-04-17T23:37:57.588885184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-595cc8447d-ln8pp,Uid:8232a895-6753-4d9f-9394-b5ebfb5a79c4,Namespace:calico-system,Attempt:0,}" Apr 17 23:37:57.648095 kubelet[3213]: I0417 23:37:57.648030 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/36a96baf-e8bf-431a-9ddf-e9ecc28a4802-socket-dir\") pod \"csi-node-driver-v54pl\" (UID: \"36a96baf-e8bf-431a-9ddf-e9ecc28a4802\") " pod="calico-system/csi-node-driver-v54pl" Apr 17 23:37:57.653585 kubelet[3213]: I0417 23:37:57.653546 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/36a96baf-e8bf-431a-9ddf-e9ecc28a4802-registration-dir\") pod \"csi-node-driver-v54pl\" (UID: \"36a96baf-e8bf-431a-9ddf-e9ecc28a4802\") " pod="calico-system/csi-node-driver-v54pl" Apr 17 23:37:57.656024 kubelet[3213]: I0417 23:37:57.655946 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/36a96baf-e8bf-431a-9ddf-e9ecc28a4802-kubelet-dir\") pod \"csi-node-driver-v54pl\" (UID: \"36a96baf-e8bf-431a-9ddf-e9ecc28a4802\") " pod="calico-system/csi-node-driver-v54pl" Apr 17 23:37:57.657302 kubelet[3213]: I0417 23:37:57.657133 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/36a96baf-e8bf-431a-9ddf-e9ecc28a4802-varrun\") pod \"csi-node-driver-v54pl\" (UID: \"36a96baf-e8bf-431a-9ddf-e9ecc28a4802\") " pod="calico-system/csi-node-driver-v54pl" Apr 17 23:37:57.657904 kubelet[3213]: I0417 23:37:57.657753 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9snf\" (UniqueName: \"kubernetes.io/projected/36a96baf-e8bf-431a-9ddf-e9ecc28a4802-kube-api-access-x9snf\") pod \"csi-node-driver-v54pl\" (UID: \"36a96baf-e8bf-431a-9ddf-e9ecc28a4802\") " pod="calico-system/csi-node-driver-v54pl" Apr 17 23:37:57.691969 containerd[1982]: time="2026-04-17T23:37:57.691735586Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:57.691969 containerd[1982]: time="2026-04-17T23:37:57.691828337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:57.692748 containerd[1982]: time="2026-04-17T23:37:57.691920757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:57.692748 containerd[1982]: time="2026-04-17T23:37:57.692149260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:57.703077 containerd[1982]: time="2026-04-17T23:37:57.702625301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tvjrn,Uid:eb7cdf61-8cf7-493c-89d5-2be3916f7a7e,Namespace:calico-system,Attempt:0,}" Apr 17 23:37:57.769913 systemd[1]: Started cri-containerd-f388343cd2d51e011bfc05e66bdf330713df29cba6087e18a322c7689a238ce2.scope - libcontainer container f388343cd2d51e011bfc05e66bdf330713df29cba6087e18a322c7689a238ce2.
Error: unexpected end of JSON input" Apr 17 23:37:57.782790 kubelet[3213]: E0417 23:37:57.782769 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.782790 kubelet[3213]: W0417 23:37:57.782786 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.782973 kubelet[3213]: E0417 23:37:57.782801 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:37:57.785501 kubelet[3213]: E0417 23:37:57.785480 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.785501 kubelet[3213]: W0417 23:37:57.785496 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.786713 kubelet[3213]: E0417 23:37:57.785513 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:37:57.786804 kubelet[3213]: E0417 23:37:57.786715 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.786804 kubelet[3213]: W0417 23:37:57.786728 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.786804 kubelet[3213]: E0417 23:37:57.786745 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:37:57.787259 kubelet[3213]: E0417 23:37:57.787246 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.787362 kubelet[3213]: W0417 23:37:57.787260 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.787362 kubelet[3213]: E0417 23:37:57.787273 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:37:57.790598 kubelet[3213]: E0417 23:37:57.790577 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.790598 kubelet[3213]: W0417 23:37:57.790594 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.790961 kubelet[3213]: E0417 23:37:57.790611 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:37:57.791448 kubelet[3213]: E0417 23:37:57.791048 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.791448 kubelet[3213]: W0417 23:37:57.791065 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.791448 kubelet[3213]: E0417 23:37:57.791078 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:37:57.791448 kubelet[3213]: E0417 23:37:57.791311 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.791448 kubelet[3213]: W0417 23:37:57.791320 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.791448 kubelet[3213]: E0417 23:37:57.791332 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:37:57.792265 kubelet[3213]: E0417 23:37:57.792174 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.792265 kubelet[3213]: W0417 23:37:57.792186 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.792265 kubelet[3213]: E0417 23:37:57.792200 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:37:57.792751 kubelet[3213]: E0417 23:37:57.792502 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.792751 kubelet[3213]: W0417 23:37:57.792517 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.792751 kubelet[3213]: E0417 23:37:57.792530 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:37:57.793430 kubelet[3213]: E0417 23:37:57.793413 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.793430 kubelet[3213]: W0417 23:37:57.793430 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.793540 kubelet[3213]: E0417 23:37:57.793444 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:37:57.795715 kubelet[3213]: E0417 23:37:57.794860 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.795715 kubelet[3213]: W0417 23:37:57.794875 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.795715 kubelet[3213]: E0417 23:37:57.794890 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:37:57.795970 kubelet[3213]: E0417 23:37:57.795733 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.795970 kubelet[3213]: W0417 23:37:57.795745 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.795970 kubelet[3213]: E0417 23:37:57.795758 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:37:57.797222 kubelet[3213]: E0417 23:37:57.797162 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.797222 kubelet[3213]: W0417 23:37:57.797181 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.797222 kubelet[3213]: E0417 23:37:57.797196 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:37:57.798037 kubelet[3213]: E0417 23:37:57.798012 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.798037 kubelet[3213]: W0417 23:37:57.798028 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.798172 kubelet[3213]: E0417 23:37:57.798046 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:37:57.799080 kubelet[3213]: E0417 23:37:57.799053 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:37:57.799756 kubelet[3213]: W0417 23:37:57.799173 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:37:57.799756 kubelet[3213]: E0417 23:37:57.799192 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:37:57.804939 containerd[1982]: time="2026-04-17T23:37:57.803403630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:37:57.804939 containerd[1982]: time="2026-04-17T23:37:57.803506389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:37:57.804939 containerd[1982]: time="2026-04-17T23:37:57.803600925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:37:57.804939 containerd[1982]: time="2026-04-17T23:37:57.804800808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:37:57.823756 kubelet[3213]: E0417 23:37:57.822080 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:37:57.823756 kubelet[3213]: W0417 23:37:57.822100 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:37:57.823756 kubelet[3213]: E0417 23:37:57.822120 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 17 23:37:57.845942 systemd[1]: Started cri-containerd-2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901.scope - libcontainer container 2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901.
Apr 17 23:37:57.885090 containerd[1982]: time="2026-04-17T23:37:57.885041449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-595cc8447d-ln8pp,Uid:8232a895-6753-4d9f-9394-b5ebfb5a79c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"f388343cd2d51e011bfc05e66bdf330713df29cba6087e18a322c7689a238ce2\""
Apr 17 23:37:57.888789 containerd[1982]: time="2026-04-17T23:37:57.888753415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 17 23:37:57.898521 containerd[1982]: time="2026-04-17T23:37:57.898460929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tvjrn,Uid:eb7cdf61-8cf7-493c-89d5-2be3916f7a7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901\""
Apr 17 23:37:58.970975 kubelet[3213]: E0417 23:37:58.970927 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802"
Apr 17 23:37:59.243146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4183195723.mount: Deactivated successfully.
Apr 17 23:38:00.074661 containerd[1982]: time="2026-04-17T23:38:00.074314700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:00.080098 containerd[1982]: time="2026-04-17T23:38:00.080031332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 17 23:38:00.082320 containerd[1982]: time="2026-04-17T23:38:00.082260288Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:00.090487 containerd[1982]: time="2026-04-17T23:38:00.090316375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 17 23:38:00.091468 containerd[1982]: time="2026-04-17T23:38:00.091237366Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.202423197s"
Apr 17 23:38:00.091468 containerd[1982]: time="2026-04-17T23:38:00.091284608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 17 23:38:00.093203 containerd[1982]: time="2026-04-17T23:38:00.092824911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 17 23:38:00.123464 containerd[1982]: time="2026-04-17T23:38:00.123405637Z" level=info msg="CreateContainer within sandbox \"f388343cd2d51e011bfc05e66bdf330713df29cba6087e18a322c7689a238ce2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 17 23:38:00.197866 containerd[1982]: time="2026-04-17T23:38:00.197805802Z" level=info msg="CreateContainer within sandbox \"f388343cd2d51e011bfc05e66bdf330713df29cba6087e18a322c7689a238ce2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4c5e1bd1edc8b7db3e9ca6dfb582cdcb466c43e6a537cfe5c5ea0ce74041fdbc\""
Apr 17 23:38:00.198659 containerd[1982]: time="2026-04-17T23:38:00.198421265Z" level=info msg="StartContainer for \"4c5e1bd1edc8b7db3e9ca6dfb582cdcb466c43e6a537cfe5c5ea0ce74041fdbc\""
Apr 17 23:38:00.242938 systemd[1]: Started cri-containerd-4c5e1bd1edc8b7db3e9ca6dfb582cdcb466c43e6a537cfe5c5ea0ce74041fdbc.scope - libcontainer container 4c5e1bd1edc8b7db3e9ca6dfb582cdcb466c43e6a537cfe5c5ea0ce74041fdbc.
Apr 17 23:38:00.294272 containerd[1982]: time="2026-04-17T23:38:00.294119697Z" level=info msg="StartContainer for \"4c5e1bd1edc8b7db3e9ca6dfb582cdcb466c43e6a537cfe5c5ea0ce74041fdbc\" returns successfully"
Apr 17 23:38:00.971030 kubelet[3213]: E0417 23:38:00.970962 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802"
Apr 17 23:38:01.115945 kubelet[3213]: I0417 23:38:01.115872 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-595cc8447d-ln8pp" podStartSLOduration=1.910791667 podStartE2EDuration="4.115851245s" podCreationTimestamp="2026-04-17 23:37:57 +0000 UTC" firstStartedPulling="2026-04-17 23:37:57.887604786 +0000 UTC m=+20.059917396" lastFinishedPulling="2026-04-17 23:38:00.092664362 +0000 UTC m=+22.264976974" observedRunningTime="2026-04-17 23:38:01.115483644 +0000 UTC m=+23.287796277" watchObservedRunningTime="2026-04-17 23:38:01.115851245 +0000 UTC m=+23.288163877"
Apr 17 23:38:01.172199 kubelet[3213]: E0417 23:38:01.172153 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:38:01.172199 kubelet[3213]: W0417 23:38:01.172186 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:38:01.172423 kubelet[3213]: E0417 23:38:01.172211 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
[The three-line FlexVolume probe failure for nodeagent~uds repeats continuously from Apr 17 23:38:01.172497 through Apr 17 23:38:01.200800 with identical content. Repeats elided.]
Apr 17 23:38:01.201765 kubelet[3213]: E0417 23:38:01.201742 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 17 23:38:01.201867 kubelet[3213]: W0417 23:38:01.201855 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 17 23:38:01.201945 kubelet[3213]: E0417 23:38:01.201934 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 17 23:38:01.202304 kubelet[3213]: E0417 23:38:01.202291 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:01.202410 kubelet[3213]: W0417 23:38:01.202397 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:01.202486 kubelet[3213]: E0417 23:38:01.202474 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:01.203244 kubelet[3213]: E0417 23:38:01.203230 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:01.203431 kubelet[3213]: W0417 23:38:01.203323 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:01.203431 kubelet[3213]: E0417 23:38:01.203341 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:01.203820 kubelet[3213]: E0417 23:38:01.203684 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:01.203820 kubelet[3213]: W0417 23:38:01.203726 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:01.203820 kubelet[3213]: E0417 23:38:01.203739 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:01.204421 kubelet[3213]: E0417 23:38:01.204153 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:01.204421 kubelet[3213]: W0417 23:38:01.204165 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:01.204421 kubelet[3213]: E0417 23:38:01.204177 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:01.204712 kubelet[3213]: E0417 23:38:01.204607 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:01.204712 kubelet[3213]: W0417 23:38:01.204619 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:01.204712 kubelet[3213]: E0417 23:38:01.204631 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:01.205293 kubelet[3213]: E0417 23:38:01.205096 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:01.205293 kubelet[3213]: W0417 23:38:01.205109 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:01.205293 kubelet[3213]: E0417 23:38:01.205122 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:01.205977 kubelet[3213]: E0417 23:38:01.205783 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:01.205977 kubelet[3213]: W0417 23:38:01.205797 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:01.205977 kubelet[3213]: E0417 23:38:01.205810 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:01.206476 kubelet[3213]: E0417 23:38:01.206297 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:01.206476 kubelet[3213]: W0417 23:38:01.206310 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:01.206476 kubelet[3213]: E0417 23:38:01.206324 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:01.207165 kubelet[3213]: E0417 23:38:01.207004 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:01.207165 kubelet[3213]: W0417 23:38:01.207017 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:01.207165 kubelet[3213]: E0417 23:38:01.207031 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 17 23:38:01.207534 kubelet[3213]: E0417 23:38:01.207483 3213 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 17 23:38:01.207534 kubelet[3213]: W0417 23:38:01.207494 3213 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 17 23:38:01.207534 kubelet[3213]: E0417 23:38:01.207509 3213 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 17 23:38:01.313521 containerd[1982]: time="2026-04-17T23:38:01.313393910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:01.315318 containerd[1982]: time="2026-04-17T23:38:01.315115286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 17 23:38:01.318719 containerd[1982]: time="2026-04-17T23:38:01.317674988Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:01.321148 containerd[1982]: time="2026-04-17T23:38:01.321090528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:01.322108 containerd[1982]: time="2026-04-17T23:38:01.322069011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.229142509s" Apr 17 23:38:01.322108 containerd[1982]: time="2026-04-17T23:38:01.322105837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 17 23:38:01.328816 containerd[1982]: time="2026-04-17T23:38:01.328762657Z" level=info msg="CreateContainer within sandbox \"2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 17 23:38:01.361721 containerd[1982]: time="2026-04-17T23:38:01.361638837Z" level=info msg="CreateContainer within sandbox \"2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fb776ecb516d59c2e0c82603b0e325a50767455582c24f6302b0a56c3b3f8811\"" Apr 17 23:38:01.365841 containerd[1982]: time="2026-04-17T23:38:01.364987600Z" level=info msg="StartContainer for \"fb776ecb516d59c2e0c82603b0e325a50767455582c24f6302b0a56c3b3f8811\"" Apr 17 23:38:01.465018 systemd[1]: Started cri-containerd-fb776ecb516d59c2e0c82603b0e325a50767455582c24f6302b0a56c3b3f8811.scope - libcontainer container fb776ecb516d59c2e0c82603b0e325a50767455582c24f6302b0a56c3b3f8811. Apr 17 23:38:01.500172 containerd[1982]: time="2026-04-17T23:38:01.499887654Z" level=info msg="StartContainer for \"fb776ecb516d59c2e0c82603b0e325a50767455582c24f6302b0a56c3b3f8811\" returns successfully" Apr 17 23:38:01.515115 systemd[1]: cri-containerd-fb776ecb516d59c2e0c82603b0e325a50767455582c24f6302b0a56c3b3f8811.scope: Deactivated successfully. Apr 17 23:38:01.639524 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb776ecb516d59c2e0c82603b0e325a50767455582c24f6302b0a56c3b3f8811-rootfs.mount: Deactivated successfully. 
Apr 17 23:38:01.833121 containerd[1982]: time="2026-04-17T23:38:01.744403462Z" level=info msg="shim disconnected" id=fb776ecb516d59c2e0c82603b0e325a50767455582c24f6302b0a56c3b3f8811 namespace=k8s.io Apr 17 23:38:01.833121 containerd[1982]: time="2026-04-17T23:38:01.833113688Z" level=warning msg="cleaning up after shim disconnected" id=fb776ecb516d59c2e0c82603b0e325a50767455582c24f6302b0a56c3b3f8811 namespace=k8s.io Apr 17 23:38:01.833435 containerd[1982]: time="2026-04-17T23:38:01.833134709Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:02.106624 kubelet[3213]: I0417 23:38:02.106540 3213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:38:02.109275 containerd[1982]: time="2026-04-17T23:38:02.109238726Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 17 23:38:02.971499 kubelet[3213]: E0417 23:38:02.971452 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:38:03.120723 kubelet[3213]: I0417 23:38:03.120382 3213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:38:04.970802 kubelet[3213]: E0417 23:38:04.970625 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:38:06.971127 kubelet[3213]: E0417 23:38:06.971081 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:38:08.973077 kubelet[3213]: E0417 23:38:08.971473 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:38:10.970823 kubelet[3213]: E0417 23:38:10.970761 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:38:12.030802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2002162167.mount: Deactivated successfully. 
Apr 17 23:38:12.091146 containerd[1982]: time="2026-04-17T23:38:12.083416961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:12.093183 containerd[1982]: time="2026-04-17T23:38:12.088053043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 17 23:38:12.095650 containerd[1982]: time="2026-04-17T23:38:12.095576824Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:12.099577 containerd[1982]: time="2026-04-17T23:38:12.099510751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:12.101110 containerd[1982]: time="2026-04-17T23:38:12.100261928Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 9.990975049s" Apr 17 23:38:12.101110 containerd[1982]: time="2026-04-17T23:38:12.100307275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 17 23:38:12.212313 containerd[1982]: time="2026-04-17T23:38:12.212261184Z" level=info msg="CreateContainer within sandbox \"2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 17 23:38:12.270163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2699983130.mount: 
Deactivated successfully. Apr 17 23:38:12.276649 containerd[1982]: time="2026-04-17T23:38:12.276599894Z" level=info msg="CreateContainer within sandbox \"2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"9e16c866d439281c3bf2289dc8d232ddc17ca4de4e374d0bcb3b2379d5281981\"" Apr 17 23:38:12.277403 containerd[1982]: time="2026-04-17T23:38:12.277357792Z" level=info msg="StartContainer for \"9e16c866d439281c3bf2289dc8d232ddc17ca4de4e374d0bcb3b2379d5281981\"" Apr 17 23:38:12.353885 systemd[1]: Started cri-containerd-9e16c866d439281c3bf2289dc8d232ddc17ca4de4e374d0bcb3b2379d5281981.scope - libcontainer container 9e16c866d439281c3bf2289dc8d232ddc17ca4de4e374d0bcb3b2379d5281981. Apr 17 23:38:12.451194 containerd[1982]: time="2026-04-17T23:38:12.451139831Z" level=info msg="StartContainer for \"9e16c866d439281c3bf2289dc8d232ddc17ca4de4e374d0bcb3b2379d5281981\" returns successfully" Apr 17 23:38:12.512573 systemd[1]: cri-containerd-9e16c866d439281c3bf2289dc8d232ddc17ca4de4e374d0bcb3b2379d5281981.scope: Deactivated successfully. 
Apr 17 23:38:12.636119 containerd[1982]: time="2026-04-17T23:38:12.635951044Z" level=info msg="shim disconnected" id=9e16c866d439281c3bf2289dc8d232ddc17ca4de4e374d0bcb3b2379d5281981 namespace=k8s.io Apr 17 23:38:12.636119 containerd[1982]: time="2026-04-17T23:38:12.636023125Z" level=warning msg="cleaning up after shim disconnected" id=9e16c866d439281c3bf2289dc8d232ddc17ca4de4e374d0bcb3b2379d5281981 namespace=k8s.io Apr 17 23:38:12.636119 containerd[1982]: time="2026-04-17T23:38:12.636035965Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:12.971376 kubelet[3213]: E0417 23:38:12.971241 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:38:13.029164 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e16c866d439281c3bf2289dc8d232ddc17ca4de4e374d0bcb3b2379d5281981-rootfs.mount: Deactivated successfully. 
Apr 17 23:38:13.145199 containerd[1982]: time="2026-04-17T23:38:13.144154346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 17 23:38:14.971111 kubelet[3213]: E0417 23:38:14.971039 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:38:16.287821 containerd[1982]: time="2026-04-17T23:38:16.287594514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:16.289360 containerd[1982]: time="2026-04-17T23:38:16.289078127Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 17 23:38:16.295526 containerd[1982]: time="2026-04-17T23:38:16.295189141Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:16.299644 containerd[1982]: time="2026-04-17T23:38:16.298667434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:16.300269 containerd[1982]: time="2026-04-17T23:38:16.300230494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.1560296s" Apr 17 23:38:16.300359 containerd[1982]: time="2026-04-17T23:38:16.300278296Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 17 23:38:16.305427 containerd[1982]: time="2026-04-17T23:38:16.305386088Z" level=info msg="CreateContainer within sandbox \"2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 17 23:38:16.325928 containerd[1982]: time="2026-04-17T23:38:16.325884345Z" level=info msg="CreateContainer within sandbox \"2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0e01d79359fe2937a01ca666ee3ca08e4a84213bb73874b744a2cd501c077447\"" Apr 17 23:38:16.328051 containerd[1982]: time="2026-04-17T23:38:16.326796321Z" level=info msg="StartContainer for \"0e01d79359fe2937a01ca666ee3ca08e4a84213bb73874b744a2cd501c077447\"" Apr 17 23:38:16.363986 systemd[1]: run-containerd-runc-k8s.io-0e01d79359fe2937a01ca666ee3ca08e4a84213bb73874b744a2cd501c077447-runc.q8Qi2r.mount: Deactivated successfully. Apr 17 23:38:16.373910 systemd[1]: Started cri-containerd-0e01d79359fe2937a01ca666ee3ca08e4a84213bb73874b744a2cd501c077447.scope - libcontainer container 0e01d79359fe2937a01ca666ee3ca08e4a84213bb73874b744a2cd501c077447. 
Apr 17 23:38:16.409796 containerd[1982]: time="2026-04-17T23:38:16.409743292Z" level=info msg="StartContainer for \"0e01d79359fe2937a01ca666ee3ca08e4a84213bb73874b744a2cd501c077447\" returns successfully" Apr 17 23:38:16.994724 kubelet[3213]: E0417 23:38:16.992795 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:38:17.408178 systemd[1]: cri-containerd-0e01d79359fe2937a01ca666ee3ca08e4a84213bb73874b744a2cd501c077447.scope: Deactivated successfully. Apr 17 23:38:17.455517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0e01d79359fe2937a01ca666ee3ca08e4a84213bb73874b744a2cd501c077447-rootfs.mount: Deactivated successfully. Apr 17 23:38:17.464021 containerd[1982]: time="2026-04-17T23:38:17.463467783Z" level=info msg="shim disconnected" id=0e01d79359fe2937a01ca666ee3ca08e4a84213bb73874b744a2cd501c077447 namespace=k8s.io Apr 17 23:38:17.464021 containerd[1982]: time="2026-04-17T23:38:17.463536144Z" level=warning msg="cleaning up after shim disconnected" id=0e01d79359fe2937a01ca666ee3ca08e4a84213bb73874b744a2cd501c077447 namespace=k8s.io Apr 17 23:38:17.464021 containerd[1982]: time="2026-04-17T23:38:17.463550654Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:17.484152 kubelet[3213]: I0417 23:38:17.459091 3213 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 17 23:38:17.695214 kubelet[3213]: I0417 23:38:17.695103 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e-config-volume\") pod \"coredns-674b8bbfcf-c8jqp\" (UID: \"1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e\") " 
pod="kube-system/coredns-674b8bbfcf-c8jqp" Apr 17 23:38:17.695214 kubelet[3213]: I0417 23:38:17.695192 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac-config\") pod \"goldmane-5b85766d88-hzpfm\" (UID: \"ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac\") " pod="calico-system/goldmane-5b85766d88-hzpfm" Apr 17 23:38:17.695390 kubelet[3213]: I0417 23:38:17.695220 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac-goldmane-key-pair\") pod \"goldmane-5b85766d88-hzpfm\" (UID: \"ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac\") " pod="calico-system/goldmane-5b85766d88-hzpfm" Apr 17 23:38:17.695390 kubelet[3213]: I0417 23:38:17.695245 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a4f940d-c537-4a63-a007-4209639f0172-tigera-ca-bundle\") pod \"calico-kube-controllers-7c9ff59c-q46w4\" (UID: \"4a4f940d-c537-4a63-a007-4209639f0172\") " pod="calico-system/calico-kube-controllers-7c9ff59c-q46w4" Apr 17 23:38:17.695390 kubelet[3213]: I0417 23:38:17.695274 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmm5\" (UniqueName: \"kubernetes.io/projected/1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e-kube-api-access-cvmm5\") pod \"coredns-674b8bbfcf-c8jqp\" (UID: \"1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e\") " pod="kube-system/coredns-674b8bbfcf-c8jqp" Apr 17 23:38:17.695390 kubelet[3213]: I0417 23:38:17.695299 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-hzpfm\" 
(UID: \"ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac\") " pod="calico-system/goldmane-5b85766d88-hzpfm" Apr 17 23:38:17.695390 kubelet[3213]: I0417 23:38:17.695320 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ab6588a0-8817-4e9c-935c-c360a970fe47-config-volume\") pod \"coredns-674b8bbfcf-glngr\" (UID: \"ab6588a0-8817-4e9c-935c-c360a970fe47\") " pod="kube-system/coredns-674b8bbfcf-glngr" Apr 17 23:38:17.695629 kubelet[3213]: I0417 23:38:17.695359 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69tf4\" (UniqueName: \"kubernetes.io/projected/ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac-kube-api-access-69tf4\") pod \"goldmane-5b85766d88-hzpfm\" (UID: \"ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac\") " pod="calico-system/goldmane-5b85766d88-hzpfm" Apr 17 23:38:17.695629 kubelet[3213]: I0417 23:38:17.695386 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qvl6\" (UniqueName: \"kubernetes.io/projected/ab6588a0-8817-4e9c-935c-c360a970fe47-kube-api-access-4qvl6\") pod \"coredns-674b8bbfcf-glngr\" (UID: \"ab6588a0-8817-4e9c-935c-c360a970fe47\") " pod="kube-system/coredns-674b8bbfcf-glngr" Apr 17 23:38:17.695629 kubelet[3213]: I0417 23:38:17.695415 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whfrm\" (UniqueName: \"kubernetes.io/projected/4a4f940d-c537-4a63-a007-4209639f0172-kube-api-access-whfrm\") pod \"calico-kube-controllers-7c9ff59c-q46w4\" (UID: \"4a4f940d-c537-4a63-a007-4209639f0172\") " pod="calico-system/calico-kube-controllers-7c9ff59c-q46w4" Apr 17 23:38:17.719092 systemd[1]: Created slice kubepods-besteffort-pod4a4f940d_c537_4a63_a007_4209639f0172.slice - libcontainer container kubepods-besteffort-pod4a4f940d_c537_4a63_a007_4209639f0172.slice. 
Apr 17 23:38:17.740324 systemd[1]: Created slice kubepods-besteffort-podcec6c23d_ce9d_4878_9eb1_de1fc8bcdc4b.slice - libcontainer container kubepods-besteffort-podcec6c23d_ce9d_4878_9eb1_de1fc8bcdc4b.slice. Apr 17 23:38:17.751265 systemd[1]: Created slice kubepods-besteffort-podd3c6abe4_044b_4f66_a78f_bda6e0004d81.slice - libcontainer container kubepods-besteffort-podd3c6abe4_044b_4f66_a78f_bda6e0004d81.slice. Apr 17 23:38:17.762626 systemd[1]: Created slice kubepods-burstable-pod1f5613b0_e0ec_4ed6_ae4c_5a83ab1b043e.slice - libcontainer container kubepods-burstable-pod1f5613b0_e0ec_4ed6_ae4c_5a83ab1b043e.slice. Apr 17 23:38:17.772576 systemd[1]: Created slice kubepods-besteffort-podebc34f2a_f6b4_48f0_9c63_2ba7bb9bc6ac.slice - libcontainer container kubepods-besteffort-podebc34f2a_f6b4_48f0_9c63_2ba7bb9bc6ac.slice. Apr 17 23:38:17.779563 systemd[1]: Created slice kubepods-besteffort-pod1bc2dee4_01d4_4b29_b645_fc728a206f14.slice - libcontainer container kubepods-besteffort-pod1bc2dee4_01d4_4b29_b645_fc728a206f14.slice. Apr 17 23:38:17.792491 systemd[1]: Created slice kubepods-burstable-podab6588a0_8817_4e9c_935c_c360a970fe47.slice - libcontainer container kubepods-burstable-podab6588a0_8817_4e9c_935c_c360a970fe47.slice. 
Apr 17 23:38:17.796222 kubelet[3213]: I0417 23:38:17.796182 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b-calico-apiserver-certs\") pod \"calico-apiserver-84cf778744-tr4mm\" (UID: \"cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b\") " pod="calico-system/calico-apiserver-84cf778744-tr4mm" Apr 17 23:38:17.796341 kubelet[3213]: I0417 23:38:17.796269 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rvgh\" (UniqueName: \"kubernetes.io/projected/cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b-kube-api-access-2rvgh\") pod \"calico-apiserver-84cf778744-tr4mm\" (UID: \"cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b\") " pod="calico-system/calico-apiserver-84cf778744-tr4mm" Apr 17 23:38:17.796341 kubelet[3213]: I0417 23:38:17.796317 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1bc2dee4-01d4-4b29-b645-fc728a206f14-calico-apiserver-certs\") pod \"calico-apiserver-84cf778744-ml2v7\" (UID: \"1bc2dee4-01d4-4b29-b645-fc728a206f14\") " pod="calico-system/calico-apiserver-84cf778744-ml2v7" Apr 17 23:38:17.796468 kubelet[3213]: I0417 23:38:17.796371 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99kng\" (UniqueName: \"kubernetes.io/projected/1bc2dee4-01d4-4b29-b645-fc728a206f14-kube-api-access-99kng\") pod \"calico-apiserver-84cf778744-ml2v7\" (UID: \"1bc2dee4-01d4-4b29-b645-fc728a206f14\") " pod="calico-system/calico-apiserver-84cf778744-ml2v7" Apr 17 23:38:17.898454 kubelet[3213]: I0417 23:38:17.897648 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d3c6abe4-044b-4f66-a78f-bda6e0004d81-nginx-config\") 
pod \"whisker-65448f5dbb-ln8lt\" (UID: \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\") " pod="calico-system/whisker-65448f5dbb-ln8lt" Apr 17 23:38:17.898454 kubelet[3213]: I0417 23:38:17.897729 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3c6abe4-044b-4f66-a78f-bda6e0004d81-whisker-backend-key-pair\") pod \"whisker-65448f5dbb-ln8lt\" (UID: \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\") " pod="calico-system/whisker-65448f5dbb-ln8lt" Apr 17 23:38:17.898454 kubelet[3213]: I0417 23:38:17.897776 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bgv\" (UniqueName: \"kubernetes.io/projected/d3c6abe4-044b-4f66-a78f-bda6e0004d81-kube-api-access-79bgv\") pod \"whisker-65448f5dbb-ln8lt\" (UID: \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\") " pod="calico-system/whisker-65448f5dbb-ln8lt" Apr 17 23:38:17.898454 kubelet[3213]: I0417 23:38:17.897884 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c6abe4-044b-4f66-a78f-bda6e0004d81-whisker-ca-bundle\") pod \"whisker-65448f5dbb-ln8lt\" (UID: \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\") " pod="calico-system/whisker-65448f5dbb-ln8lt" Apr 17 23:38:18.042033 containerd[1982]: time="2026-04-17T23:38:18.041925070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9ff59c-q46w4,Uid:4a4f940d-c537-4a63-a007-4209639f0172,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:18.046337 containerd[1982]: time="2026-04-17T23:38:18.046296870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cf778744-tr4mm,Uid:cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:18.054863 containerd[1982]: time="2026-04-17T23:38:18.054619180Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-65448f5dbb-ln8lt,Uid:d3c6abe4-044b-4f66-a78f-bda6e0004d81,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:18.069208 containerd[1982]: time="2026-04-17T23:38:18.069164545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c8jqp,Uid:1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:18.077625 containerd[1982]: time="2026-04-17T23:38:18.077567151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hzpfm,Uid:ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:18.089040 containerd[1982]: time="2026-04-17T23:38:18.088995415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cf778744-ml2v7,Uid:1bc2dee4-01d4-4b29-b645-fc728a206f14,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:18.114414 containerd[1982]: time="2026-04-17T23:38:18.114368321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-glngr,Uid:ab6588a0-8817-4e9c-935c-c360a970fe47,Namespace:kube-system,Attempt:0,}" Apr 17 23:38:18.267849 containerd[1982]: time="2026-04-17T23:38:18.267055462Z" level=info msg="CreateContainer within sandbox \"2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 17 23:38:18.339734 containerd[1982]: time="2026-04-17T23:38:18.337127126Z" level=info msg="CreateContainer within sandbox \"2cebb6d134322661b641471eb99ccb1b769d84daad50d97f29b5f9b527edd901\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"42b56c593fa17678a456fa58dd32fc97deb184654651a28f31f14544986efe10\"" Apr 17 23:38:18.345443 containerd[1982]: time="2026-04-17T23:38:18.345374407Z" level=info msg="StartContainer for \"42b56c593fa17678a456fa58dd32fc97deb184654651a28f31f14544986efe10\"" Apr 17 23:38:18.425003 systemd[1]: Started 
cri-containerd-42b56c593fa17678a456fa58dd32fc97deb184654651a28f31f14544986efe10.scope - libcontainer container 42b56c593fa17678a456fa58dd32fc97deb184654651a28f31f14544986efe10. Apr 17 23:38:18.532944 containerd[1982]: time="2026-04-17T23:38:18.532860399Z" level=info msg="StartContainer for \"42b56c593fa17678a456fa58dd32fc97deb184654651a28f31f14544986efe10\" returns successfully" Apr 17 23:38:18.774740 containerd[1982]: time="2026-04-17T23:38:18.769855662Z" level=error msg="Failed to destroy network for sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.779284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4-shm.mount: Deactivated successfully. Apr 17 23:38:18.797015 containerd[1982]: time="2026-04-17T23:38:18.796953329Z" level=error msg="Failed to destroy network for sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.797420 containerd[1982]: time="2026-04-17T23:38:18.797377496Z" level=error msg="encountered an error cleaning up failed sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.797516 containerd[1982]: time="2026-04-17T23:38:18.797468673Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-84cf778744-ml2v7,Uid:1bc2dee4-01d4-4b29-b645-fc728a206f14,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.803352 containerd[1982]: time="2026-04-17T23:38:18.803248998Z" level=error msg="Failed to destroy network for sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.803914 containerd[1982]: time="2026-04-17T23:38:18.803875834Z" level=error msg="encountered an error cleaning up failed sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.804102 containerd[1982]: time="2026-04-17T23:38:18.804073725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cf778744-tr4mm,Uid:cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.804391 containerd[1982]: time="2026-04-17T23:38:18.804362896Z" level=error msg="Failed to destroy network for sandbox 
\"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.804869 containerd[1982]: time="2026-04-17T23:38:18.804483001Z" level=error msg="encountered an error cleaning up failed sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.806315 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9-shm.mount: Deactivated successfully. Apr 17 23:38:18.807148 containerd[1982]: time="2026-04-17T23:38:18.807115334Z" level=error msg="encountered an error cleaning up failed sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.807508 containerd[1982]: time="2026-04-17T23:38:18.807463762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hzpfm,Uid:ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.808515 kubelet[3213]: E0417 23:38:18.808459 3213 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.812351 containerd[1982]: time="2026-04-17T23:38:18.812188853Z" level=error msg="Failed to destroy network for sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.812476 kubelet[3213]: E0417 23:38:18.812213 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.812541 containerd[1982]: time="2026-04-17T23:38:18.812399731Z" level=error msg="Failed to destroy network for sandbox \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.812591 containerd[1982]: time="2026-04-17T23:38:18.812535663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c8jqp,Uid:1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.814546 kubelet[3213]: E0417 23:38:18.814013 3213 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84cf778744-ml2v7" Apr 17 23:38:18.814546 kubelet[3213]: E0417 23:38:18.814080 3213 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84cf778744-ml2v7" Apr 17 23:38:18.814546 kubelet[3213]: E0417 23:38:18.814151 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84cf778744-ml2v7_calico-system(1bc2dee4-01d4-4b29-b645-fc728a206f14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84cf778744-ml2v7_calico-system(1bc2dee4-01d4-4b29-b645-fc728a206f14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84cf778744-ml2v7" podUID="1bc2dee4-01d4-4b29-b645-fc728a206f14" Apr 17 23:38:18.815279 containerd[1982]: 
time="2026-04-17T23:38:18.813959450Z" level=error msg="encountered an error cleaning up failed sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.815279 containerd[1982]: time="2026-04-17T23:38:18.815158503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65448f5dbb-ln8lt,Uid:d3c6abe4-044b-4f66-a78f-bda6e0004d81,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.815453 kubelet[3213]: E0417 23:38:18.814965 3213 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84cf778744-tr4mm" Apr 17 23:38:18.815453 kubelet[3213]: E0417 23:38:18.815012 3213 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-84cf778744-tr4mm" Apr 17 23:38:18.815453 kubelet[3213]: E0417 
23:38:18.815070 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84cf778744-tr4mm_calico-system(cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84cf778744-tr4mm_calico-system(cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84cf778744-tr4mm" podUID="cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b" Apr 17 23:38:18.816619 kubelet[3213]: E0417 23:38:18.816208 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.816619 kubelet[3213]: E0417 23:38:18.816268 3213 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-hzpfm" Apr 17 23:38:18.816619 kubelet[3213]: E0417 23:38:18.816297 3213 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-hzpfm" Apr 17 23:38:18.816444 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb-shm.mount: Deactivated successfully. Apr 17 23:38:18.817106 kubelet[3213]: E0417 23:38:18.816347 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-hzpfm_calico-system(ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-hzpfm_calico-system(ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-hzpfm" podUID="ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac" Apr 17 23:38:18.817106 kubelet[3213]: E0417 23:38:18.816391 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.817106 kubelet[3213]: E0417 23:38:18.816425 3213 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c8jqp" Apr 17 23:38:18.816601 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58-shm.mount: Deactivated successfully. Apr 17 23:38:18.817478 kubelet[3213]: E0417 23:38:18.816447 3213 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c8jqp" Apr 17 23:38:18.817478 kubelet[3213]: E0417 23:38:18.816483 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-c8jqp_kube-system(1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-c8jqp_kube-system(1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-c8jqp" podUID="1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e" Apr 17 23:38:18.817478 kubelet[3213]: E0417 23:38:18.816523 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.817730 kubelet[3213]: E0417 23:38:18.816550 3213 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65448f5dbb-ln8lt" Apr 17 23:38:18.817730 kubelet[3213]: E0417 23:38:18.816571 3213 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65448f5dbb-ln8lt" Apr 17 23:38:18.817730 kubelet[3213]: E0417 23:38:18.816792 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65448f5dbb-ln8lt_calico-system(d3c6abe4-044b-4f66-a78f-bda6e0004d81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65448f5dbb-ln8lt_calico-system(d3c6abe4-044b-4f66-a78f-bda6e0004d81)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65448f5dbb-ln8lt" podUID="d3c6abe4-044b-4f66-a78f-bda6e0004d81" Apr 17 23:38:18.818074 containerd[1982]: time="2026-04-17T23:38:18.818030364Z" level=error msg="encountered an error cleaning up failed 
sandbox \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.818175 containerd[1982]: time="2026-04-17T23:38:18.818107946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-glngr,Uid:ab6588a0-8817-4e9c-935c-c360a970fe47,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.818382 kubelet[3213]: E0417 23:38:18.818358 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.818606 kubelet[3213]: E0417 23:38:18.818486 3213 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-glngr" Apr 17 23:38:18.818606 kubelet[3213]: E0417 23:38:18.818513 3213 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-glngr" Apr 17 23:38:18.818926 kubelet[3213]: E0417 23:38:18.818847 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-glngr_kube-system(ab6588a0-8817-4e9c-935c-c360a970fe47)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-glngr_kube-system(ab6588a0-8817-4e9c-935c-c360a970fe47)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-glngr" podUID="ab6588a0-8817-4e9c-935c-c360a970fe47" Apr 17 23:38:18.824239 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4-shm.mount: Deactivated successfully. Apr 17 23:38:18.824373 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43-shm.mount: Deactivated successfully. 
Apr 17 23:38:18.829073 containerd[1982]: time="2026-04-17T23:38:18.828778233Z" level=error msg="Failed to destroy network for sandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.829713 containerd[1982]: time="2026-04-17T23:38:18.829497166Z" level=error msg="encountered an error cleaning up failed sandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.829713 containerd[1982]: time="2026-04-17T23:38:18.829580538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9ff59c-q46w4,Uid:4a4f940d-c537-4a63-a007-4209639f0172,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.830737 kubelet[3213]: E0417 23:38:18.830092 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:18.830737 kubelet[3213]: E0417 23:38:18.830165 3213 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c9ff59c-q46w4" Apr 17 23:38:18.830737 kubelet[3213]: E0417 23:38:18.830308 3213 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c9ff59c-q46w4" Apr 17 23:38:18.831034 kubelet[3213]: E0417 23:38:18.830973 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c9ff59c-q46w4_calico-system(4a4f940d-c537-4a63-a007-4209639f0172)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c9ff59c-q46w4_calico-system(4a4f940d-c537-4a63-a007-4209639f0172)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c9ff59c-q46w4" podUID="4a4f940d-c537-4a63-a007-4209639f0172" Apr 17 23:38:18.978955 systemd[1]: Created slice kubepods-besteffort-pod36a96baf_e8bf_431a_9ddf_e9ecc28a4802.slice - libcontainer container kubepods-besteffort-pod36a96baf_e8bf_431a_9ddf_e9ecc28a4802.slice. 
Apr 17 23:38:18.982810 containerd[1982]: time="2026-04-17T23:38:18.982437643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v54pl,Uid:36a96baf-e8bf-431a-9ddf-e9ecc28a4802,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:19.066567 containerd[1982]: time="2026-04-17T23:38:19.065794988Z" level=error msg="Failed to destroy network for sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:19.066567 containerd[1982]: time="2026-04-17T23:38:19.066208444Z" level=error msg="encountered an error cleaning up failed sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:19.066567 containerd[1982]: time="2026-04-17T23:38:19.066379942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v54pl,Uid:36a96baf-e8bf-431a-9ddf-e9ecc28a4802,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:19.066845 kubelet[3213]: E0417 23:38:19.066789 3213 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:19.066907 kubelet[3213]: E0417 23:38:19.066872 3213 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v54pl" Apr 17 23:38:19.066966 kubelet[3213]: E0417 23:38:19.066918 3213 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v54pl" Apr 17 23:38:19.067073 kubelet[3213]: E0417 23:38:19.067020 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v54pl_calico-system(36a96baf-e8bf-431a-9ddf-e9ecc28a4802)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v54pl_calico-system(36a96baf-e8bf-431a-9ddf-e9ecc28a4802)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:38:19.203510 kubelet[3213]: I0417 23:38:19.203460 3213 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Apr 17 23:38:19.206098 kubelet[3213]: I0417 23:38:19.206018 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Apr 17 23:38:19.213561 kubelet[3213]: I0417 23:38:19.213438 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Apr 17 23:38:19.216192 kubelet[3213]: I0417 23:38:19.216164 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:19.219532 kubelet[3213]: I0417 23:38:19.219493 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:19.241462 containerd[1982]: time="2026-04-17T23:38:19.241416104Z" level=info msg="StopPodSandbox for \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\"" Apr 17 23:38:19.243785 containerd[1982]: time="2026-04-17T23:38:19.243748189Z" level=info msg="Ensure that sandbox 409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9 in task-service has been cleanup successfully" Apr 17 23:38:19.249350 containerd[1982]: time="2026-04-17T23:38:19.248811134Z" level=info msg="StopPodSandbox for \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\"" Apr 17 23:38:19.249350 containerd[1982]: time="2026-04-17T23:38:19.249042906Z" level=info msg="Ensure that sandbox 8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4 in task-service has been cleanup successfully" Apr 17 23:38:19.250029 containerd[1982]: time="2026-04-17T23:38:19.249870926Z" level=info msg="StopPodSandbox for \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\"" Apr 17 23:38:19.250427 containerd[1982]: 
time="2026-04-17T23:38:19.250405122Z" level=info msg="Ensure that sandbox 06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb in task-service has been cleanup successfully" Apr 17 23:38:19.259014 containerd[1982]: time="2026-04-17T23:38:19.255512920Z" level=info msg="StopPodSandbox for \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\"" Apr 17 23:38:19.259014 containerd[1982]: time="2026-04-17T23:38:19.257501631Z" level=info msg="Ensure that sandbox cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58 in task-service has been cleanup successfully" Apr 17 23:38:19.263721 containerd[1982]: time="2026-04-17T23:38:19.262126080Z" level=info msg="StopPodSandbox for \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\"" Apr 17 23:38:19.263721 containerd[1982]: time="2026-04-17T23:38:19.262350082Z" level=info msg="Ensure that sandbox e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11 in task-service has been cleanup successfully" Apr 17 23:38:19.285449 kubelet[3213]: I0417 23:38:19.282007 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tvjrn" podStartSLOduration=3.863163713 podStartE2EDuration="22.264333296s" podCreationTimestamp="2026-04-17 23:37:57 +0000 UTC" firstStartedPulling="2026-04-17 23:37:57.900490469 +0000 UTC m=+20.072803083" lastFinishedPulling="2026-04-17 23:38:16.301660052 +0000 UTC m=+38.473972666" observedRunningTime="2026-04-17 23:38:19.261793891 +0000 UTC m=+41.434106523" watchObservedRunningTime="2026-04-17 23:38:19.264333296 +0000 UTC m=+41.436645928" Apr 17 23:38:19.287887 kubelet[3213]: I0417 23:38:19.287839 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Apr 17 23:38:19.302946 containerd[1982]: time="2026-04-17T23:38:19.302904662Z" level=info msg="StopPodSandbox for 
\"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\"" Apr 17 23:38:19.316784 containerd[1982]: time="2026-04-17T23:38:19.314285649Z" level=info msg="Ensure that sandbox 714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4 in task-service has been cleanup successfully" Apr 17 23:38:19.354720 kubelet[3213]: I0417 23:38:19.353120 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:19.354980 containerd[1982]: time="2026-04-17T23:38:19.354883950Z" level=info msg="StopPodSandbox for \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\"" Apr 17 23:38:19.396770 containerd[1982]: time="2026-04-17T23:38:19.396505009Z" level=info msg="Ensure that sandbox d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43 in task-service has been cleanup successfully" Apr 17 23:38:19.454187 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b-shm.mount: Deactivated successfully. 
Apr 17 23:38:19.466156 kubelet[3213]: I0417 23:38:19.466120 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Apr 17 23:38:19.512841 containerd[1982]: time="2026-04-17T23:38:19.507480537Z" level=info msg="StopPodSandbox for \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\"" Apr 17 23:38:19.512841 containerd[1982]: time="2026-04-17T23:38:19.511135546Z" level=info msg="Ensure that sandbox fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b in task-service has been cleanup successfully" Apr 17 23:38:19.543632 containerd[1982]: time="2026-04-17T23:38:19.542085778Z" level=error msg="StopPodSandbox for \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\" failed" error="failed to destroy network for sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:19.544108 kubelet[3213]: E0417 23:38:19.542444 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:19.544108 kubelet[3213]: E0417 23:38:19.542505 3213 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb"} Apr 17 23:38:19.544108 kubelet[3213]: E0417 23:38:19.542574 3213 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" 
err="failed to \"KillPodSandbox\" for \"ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:19.544108 kubelet[3213]: E0417 23:38:19.543376 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-hzpfm" podUID="ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac" Apr 17 23:38:19.550735 containerd[1982]: time="2026-04-17T23:38:19.550439695Z" level=error msg="StopPodSandbox for \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\" failed" error="failed to destroy network for sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:19.551045 kubelet[3213]: E0417 23:38:19.550994 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Apr 17 23:38:19.551286 kubelet[3213]: E0417 23:38:19.551058 3213 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58"} Apr 17 23:38:19.551286 kubelet[3213]: E0417 23:38:19.551102 3213 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:19.551286 kubelet[3213]: E0417 23:38:19.551149 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84cf778744-tr4mm" podUID="cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b" Apr 17 23:38:19.655200 containerd[1982]: time="2026-04-17T23:38:19.655129918Z" level=error msg="StopPodSandbox for \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\" failed" error="failed to destroy network for sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Apr 17 23:38:19.657618 containerd[1982]: time="2026-04-17T23:38:19.655373652Z" level=error msg="StopPodSandbox for \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\" failed" error="failed to destroy network for sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:19.657618 containerd[1982]: time="2026-04-17T23:38:19.655510242Z" level=error msg="StopPodSandbox for \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\" failed" error="failed to destroy network for sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 17 23:38:19.676205 kubelet[3213]: E0417 23:38:19.675866 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Apr 17 23:38:19.676205 kubelet[3213]: E0417 23:38:19.675923 3213 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4"} Apr 17 23:38:19.676205 kubelet[3213]: E0417 23:38:19.675970 3213 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:19.676205 kubelet[3213]: E0417 23:38:19.676000 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-c8jqp" podUID="1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e" Apr 17 23:38:19.676595 kubelet[3213]: E0417 23:38:19.676041 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Apr 17 23:38:19.676595 kubelet[3213]: E0417 23:38:19.676072 3213 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11"} Apr 17 23:38:19.676595 kubelet[3213]: E0417 23:38:19.676096 3213 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"36a96baf-e8bf-431a-9ddf-e9ecc28a4802\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:19.676595 kubelet[3213]: E0417 23:38:19.676121 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"36a96baf-e8bf-431a-9ddf-e9ecc28a4802\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v54pl" podUID="36a96baf-e8bf-431a-9ddf-e9ecc28a4802" Apr 17 23:38:19.676900 kubelet[3213]: E0417 23:38:19.676155 3213 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:19.676900 kubelet[3213]: E0417 23:38:19.676174 3213 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9"} Apr 17 23:38:19.676900 kubelet[3213]: E0417 23:38:19.676197 3213 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1bc2dee4-01d4-4b29-b645-fc728a206f14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 17 23:38:19.676900 kubelet[3213]: E0417 23:38:19.676219 3213 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1bc2dee4-01d4-4b29-b645-fc728a206f14\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-84cf778744-ml2v7" podUID="1bc2dee4-01d4-4b29-b645-fc728a206f14" Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.802 [INFO][4510] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.802 [INFO][4510] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" iface="eth0" netns="/var/run/netns/cni-e94fe887-8f08-cd56-4e0f-1224bc7d8ce0" Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.803 [INFO][4510] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" iface="eth0" netns="/var/run/netns/cni-e94fe887-8f08-cd56-4e0f-1224bc7d8ce0" Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.803 [INFO][4510] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" iface="eth0" netns="/var/run/netns/cni-e94fe887-8f08-cd56-4e0f-1224bc7d8ce0" Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.803 [INFO][4510] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.803 [INFO][4510] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.958 [INFO][4576] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" HandleID="k8s-pod-network.714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0" Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.959 [INFO][4576] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.959 [INFO][4576] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.980 [WARNING][4576] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" HandleID="k8s-pod-network.714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0" Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.981 [INFO][4576] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" HandleID="k8s-pod-network.714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0" Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.982 [INFO][4576] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:19.998393 containerd[1982]: 2026-04-17 23:38:19.991 [INFO][4510] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Apr 17 23:38:20.001275 containerd[1982]: time="2026-04-17T23:38:19.999495158Z" level=info msg="TearDown network for sandbox \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\" successfully" Apr 17 23:38:20.001275 containerd[1982]: time="2026-04-17T23:38:20.000572982Z" level=info msg="StopPodSandbox for \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\" returns successfully" Apr 17 23:38:20.004231 systemd[1]: run-netns-cni\x2de94fe887\x2d8f08\x2dcd56\x2d4e0f\x2d1224bc7d8ce0.mount: Deactivated successfully. 
Apr 17 23:38:20.018018 containerd[1982]: time="2026-04-17T23:38:20.017974023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-glngr,Uid:ab6588a0-8817-4e9c-935c-c360a970fe47,Namespace:kube-system,Attempt:1,}" Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:19.797 [INFO][4552] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:19.797 [INFO][4552] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" iface="eth0" netns="/var/run/netns/cni-abc4c8f3-cd87-26b1-50b6-fa1b89930ae6" Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:19.798 [INFO][4552] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" iface="eth0" netns="/var/run/netns/cni-abc4c8f3-cd87-26b1-50b6-fa1b89930ae6" Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:19.799 [INFO][4552] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" iface="eth0" netns="/var/run/netns/cni-abc4c8f3-cd87-26b1-50b6-fa1b89930ae6" Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:19.799 [INFO][4552] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:19.799 [INFO][4552] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:19.964 [INFO][4574] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" HandleID="k8s-pod-network.fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0" Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:19.967 [INFO][4574] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:19.983 [INFO][4574] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:20.009 [WARNING][4574] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" HandleID="k8s-pod-network.fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0" Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:20.010 [INFO][4574] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" HandleID="k8s-pod-network.fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0" Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:20.016 [INFO][4574] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:20.023515 containerd[1982]: 2026-04-17 23:38:20.021 [INFO][4552] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Apr 17 23:38:20.028623 systemd[1]: run-netns-cni\x2dabc4c8f3\x2dcd87\x2d26b1\x2d50b6\x2dfa1b89930ae6.mount: Deactivated successfully. 
Apr 17 23:38:20.029767 containerd[1982]: time="2026-04-17T23:38:20.029720841Z" level=info msg="TearDown network for sandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\" successfully" Apr 17 23:38:20.029983 containerd[1982]: time="2026-04-17T23:38:20.029851174Z" level=info msg="StopPodSandbox for \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\" returns successfully" Apr 17 23:38:20.031680 containerd[1982]: time="2026-04-17T23:38:20.030859534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9ff59c-q46w4,Uid:4a4f940d-c537-4a63-a007-4209639f0172,Namespace:calico-system,Attempt:1,}" Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:19.812 [INFO][4526] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:19.817 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" iface="eth0" netns="/var/run/netns/cni-9fbb0223-3a4d-b3c8-73ef-fa72a9aab6da" Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:19.821 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" iface="eth0" netns="/var/run/netns/cni-9fbb0223-3a4d-b3c8-73ef-fa72a9aab6da" Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:19.824 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" iface="eth0" netns="/var/run/netns/cni-9fbb0223-3a4d-b3c8-73ef-fa72a9aab6da" Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:19.824 [INFO][4526] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:19.824 [INFO][4526] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:19.979 [INFO][4584] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:19.980 [INFO][4584] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:20.017 [INFO][4584] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:20.029 [WARNING][4584] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:20.029 [INFO][4584] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:20.032 [INFO][4584] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:20.042547 containerd[1982]: 2026-04-17 23:38:20.035 [INFO][4526] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:20.043336 containerd[1982]: time="2026-04-17T23:38:20.042778476Z" level=info msg="TearDown network for sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\" successfully" Apr 17 23:38:20.043336 containerd[1982]: time="2026-04-17T23:38:20.042811145Z" level=info msg="StopPodSandbox for \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\" returns successfully" Apr 17 23:38:20.044624 containerd[1982]: time="2026-04-17T23:38:20.044165085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65448f5dbb-ln8lt,Uid:d3c6abe4-044b-4f66-a78f-bda6e0004d81,Namespace:calico-system,Attempt:1,}" Apr 17 23:38:20.392911 systemd-networkd[1902]: cali3c79a7898c9: Link UP Apr 17 23:38:20.393171 systemd-networkd[1902]: cali3c79a7898c9: Gained carrier Apr 17 23:38:20.404238 (udev-worker)[4667]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.159 [ERROR][4608] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.186 [INFO][4608] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0 calico-kube-controllers-7c9ff59c- calico-system 4a4f940d-c537-4a63-a007-4209639f0172 912 0 2026-04-17 23:37:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c9ff59c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-109 calico-kube-controllers-7c9ff59c-q46w4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3c79a7898c9 [] [] }} ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Namespace="calico-system" Pod="calico-kube-controllers-7c9ff59c-q46w4" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.186 [INFO][4608] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Namespace="calico-system" Pod="calico-kube-controllers-7c9ff59c-q46w4" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.290 [INFO][4632] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" HandleID="k8s-pod-network.84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.311 [INFO][4632] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" HandleID="k8s-pod-network.84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00044bce0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-109", "pod":"calico-kube-controllers-7c9ff59c-q46w4", "timestamp":"2026-04-17 23:38:20.290648541 +0000 UTC"}, Hostname:"ip-172-31-16-109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003a8dc0)}
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.312 [INFO][4632] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.312 [INFO][4632] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.312 [INFO][4632] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-109'
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.318 [INFO][4632] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" host="ip-172-31-16-109"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.329 [INFO][4632] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-109"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.335 [INFO][4632] ipam/ipam.go 526: Trying affinity for 192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.337 [INFO][4632] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.339 [INFO][4632] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.340 [INFO][4632] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" host="ip-172-31-16-109"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.342 [INFO][4632] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.348 [INFO][4632] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" host="ip-172-31-16-109"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.355 [INFO][4632] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.193/26] block=192.168.2.192/26 handle="k8s-pod-network.84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" host="ip-172-31-16-109"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.355 [INFO][4632] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.193/26] handle="k8s-pod-network.84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" host="ip-172-31-16-109"
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.355 [INFO][4632] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:20.420650 containerd[1982]: 2026-04-17 23:38:20.355 [INFO][4632] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.193/26] IPv6=[] ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" HandleID="k8s-pod-network.84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:20.424471 containerd[1982]: 2026-04-17 23:38:20.359 [INFO][4608] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Namespace="calico-system" Pod="calico-kube-controllers-7c9ff59c-q46w4" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0", GenerateName:"calico-kube-controllers-7c9ff59c-", Namespace:"calico-system", SelfLink:"", UID:"4a4f940d-c537-4a63-a007-4209639f0172", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9ff59c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"", Pod:"calico-kube-controllers-7c9ff59c-q46w4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c79a7898c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:20.424471 containerd[1982]: 2026-04-17 23:38:20.359 [INFO][4608] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.193/32] ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Namespace="calico-system" Pod="calico-kube-controllers-7c9ff59c-q46w4" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:20.424471 containerd[1982]: 2026-04-17 23:38:20.359 [INFO][4608] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3c79a7898c9 ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Namespace="calico-system" Pod="calico-kube-controllers-7c9ff59c-q46w4" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:20.424471 containerd[1982]: 2026-04-17 23:38:20.387 [INFO][4608] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Namespace="calico-system" Pod="calico-kube-controllers-7c9ff59c-q46w4" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:20.424471 containerd[1982]: 2026-04-17 23:38:20.387 [INFO][4608] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Namespace="calico-system" Pod="calico-kube-controllers-7c9ff59c-q46w4" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0", GenerateName:"calico-kube-controllers-7c9ff59c-", Namespace:"calico-system", SelfLink:"", UID:"4a4f940d-c537-4a63-a007-4209639f0172", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9ff59c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f", Pod:"calico-kube-controllers-7c9ff59c-q46w4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c79a7898c9", MAC:"0a:82:6d:b3:58:4b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:20.424471 containerd[1982]: 2026-04-17 23:38:20.413 [INFO][4608] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f" Namespace="calico-system" Pod="calico-kube-controllers-7c9ff59c-q46w4" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:20.469602 systemd[1]: run-netns-cni\x2d9fbb0223\x2d3a4d\x2db3c8\x2d73ef\x2dfa72a9aab6da.mount: Deactivated successfully.
Apr 17 23:38:20.513850 containerd[1982]: time="2026-04-17T23:38:20.512929536Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:38:20.513850 containerd[1982]: time="2026-04-17T23:38:20.513051312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:38:20.513850 containerd[1982]: time="2026-04-17T23:38:20.513096303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:38:20.513850 containerd[1982]: time="2026-04-17T23:38:20.513264276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:38:20.532047 (udev-worker)[4666]: Network interface NamePolicy= disabled on kernel command line.
Apr 17 23:38:20.542255 systemd-networkd[1902]: calid0b548b7bc8: Link UP
Apr 17 23:38:20.543505 systemd-networkd[1902]: calid0b548b7bc8: Gained carrier
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.171 [ERROR][4619] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.213 [INFO][4619] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0 whisker-65448f5dbb- calico-system d3c6abe4-044b-4f66-a78f-bda6e0004d81 914 0 2026-04-17 23:38:00 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65448f5dbb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-109 whisker-65448f5dbb-ln8lt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid0b548b7bc8 [] [] }} ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Namespace="calico-system" Pod="whisker-65448f5dbb-ln8lt" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.213 [INFO][4619] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Namespace="calico-system" Pod="whisker-65448f5dbb-ln8lt" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.300 [INFO][4643] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.316 [INFO][4643] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ff10), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-109", "pod":"whisker-65448f5dbb-ln8lt", "timestamp":"2026-04-17 23:38:20.300586938 +0000 UTC"}, Hostname:"ip-172-31-16-109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000e6160)}
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.316 [INFO][4643] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.356 [INFO][4643] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.356 [INFO][4643] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-109'
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.422 [INFO][4643] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" host="ip-172-31-16-109"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.432 [INFO][4643] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-109"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.447 [INFO][4643] ipam/ipam.go 526: Trying affinity for 192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.453 [INFO][4643] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.460 [INFO][4643] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.461 [INFO][4643] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" host="ip-172-31-16-109"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.465 [INFO][4643] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.484 [INFO][4643] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" host="ip-172-31-16-109"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.507 [INFO][4643] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.194/26] block=192.168.2.192/26 handle="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" host="ip-172-31-16-109"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.507 [INFO][4643] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.194/26] handle="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" host="ip-172-31-16-109"
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.507 [INFO][4643] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:20.591949 containerd[1982]: 2026-04-17 23:38:20.507 [INFO][4643] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.194/26] IPv6=[] ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0"
Apr 17 23:38:20.596202 containerd[1982]: 2026-04-17 23:38:20.519 [INFO][4619] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Namespace="calico-system" Pod="whisker-65448f5dbb-ln8lt" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0", GenerateName:"whisker-65448f5dbb-", Namespace:"calico-system", SelfLink:"", UID:"d3c6abe4-044b-4f66-a78f-bda6e0004d81", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65448f5dbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"", Pod:"whisker-65448f5dbb-ln8lt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid0b548b7bc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:20.596202 containerd[1982]: 2026-04-17 23:38:20.519 [INFO][4619] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.194/32] ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Namespace="calico-system" Pod="whisker-65448f5dbb-ln8lt" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0"
Apr 17 23:38:20.596202 containerd[1982]: 2026-04-17 23:38:20.519 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0b548b7bc8 ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Namespace="calico-system" Pod="whisker-65448f5dbb-ln8lt" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0"
Apr 17 23:38:20.596202 containerd[1982]: 2026-04-17 23:38:20.544 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Namespace="calico-system" Pod="whisker-65448f5dbb-ln8lt" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0"
Apr 17 23:38:20.596202 containerd[1982]: 2026-04-17 23:38:20.545 [INFO][4619] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Namespace="calico-system" Pod="whisker-65448f5dbb-ln8lt" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0", GenerateName:"whisker-65448f5dbb-", Namespace:"calico-system", SelfLink:"", UID:"d3c6abe4-044b-4f66-a78f-bda6e0004d81", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65448f5dbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b", Pod:"whisker-65448f5dbb-ln8lt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid0b548b7bc8", MAC:"16:99:8d:75:bc:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:20.596202 containerd[1982]: 2026-04-17 23:38:20.583 [INFO][4619] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Namespace="calico-system" Pod="whisker-65448f5dbb-ln8lt" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0"
Apr 17 23:38:20.626516 systemd[1]: Started cri-containerd-84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f.scope - libcontainer container 84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f.
Apr 17 23:38:20.638848 systemd-networkd[1902]: cali70aefc878c2: Link UP
Apr 17 23:38:20.640207 systemd-networkd[1902]: cali70aefc878c2: Gained carrier
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.158 [ERROR][4598] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.194 [INFO][4598] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0 coredns-674b8bbfcf- kube-system ab6588a0-8817-4e9c-935c-c360a970fe47 913 0 2026-04-17 23:37:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-109 coredns-674b8bbfcf-glngr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali70aefc878c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-glngr" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.196 [INFO][4598] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-glngr" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.298 [INFO][4637] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" HandleID="k8s-pod-network.0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.315 [INFO][4637] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" HandleID="k8s-pod-network.0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123910), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-109", "pod":"coredns-674b8bbfcf-glngr", "timestamp":"2026-04-17 23:38:20.298289505 +0000 UTC"}, Hostname:"ip-172-31-16-109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188f20)}
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.316 [INFO][4637] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.507 [INFO][4637] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.507 [INFO][4637] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-109'
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.521 [INFO][4637] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" host="ip-172-31-16-109"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.545 [INFO][4637] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-109"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.565 [INFO][4637] ipam/ipam.go 526: Trying affinity for 192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.575 [INFO][4637] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.582 [INFO][4637] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.582 [INFO][4637] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" host="ip-172-31-16-109"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.585 [INFO][4637] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.595 [INFO][4637] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" host="ip-172-31-16-109"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.614 [INFO][4637] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.195/26] block=192.168.2.192/26 handle="k8s-pod-network.0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" host="ip-172-31-16-109"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.614 [INFO][4637] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.195/26] handle="k8s-pod-network.0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" host="ip-172-31-16-109"
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.614 [INFO][4637] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:20.721841 containerd[1982]: 2026-04-17 23:38:20.614 [INFO][4637] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.195/26] IPv6=[] ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" HandleID="k8s-pod-network.0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:20.725046 containerd[1982]: 2026-04-17 23:38:20.621 [INFO][4598] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-glngr" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab6588a0-8817-4e9c-935c-c360a970fe47", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"", Pod:"coredns-674b8bbfcf-glngr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70aefc878c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:20.725046 containerd[1982]: 2026-04-17 23:38:20.621 [INFO][4598] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.195/32] ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-glngr" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:20.725046 containerd[1982]: 2026-04-17 23:38:20.622 [INFO][4598] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70aefc878c2 ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-glngr" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:20.725046 containerd[1982]: 2026-04-17 23:38:20.643 [INFO][4598] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-glngr" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:20.725046 containerd[1982]: 2026-04-17 23:38:20.646 [INFO][4598] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-glngr" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab6588a0-8817-4e9c-935c-c360a970fe47", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1", Pod:"coredns-674b8bbfcf-glngr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70aefc878c2", MAC:"fa:72:0f:83:e5:88", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""},
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:20.725046 containerd[1982]: 2026-04-17 23:38:20.710 [INFO][4598] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1" Namespace="kube-system" Pod="coredns-674b8bbfcf-glngr" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0" Apr 17 23:38:20.728115 containerd[1982]: time="2026-04-17T23:38:20.722521543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:20.728115 containerd[1982]: time="2026-04-17T23:38:20.722616438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:20.728115 containerd[1982]: time="2026-04-17T23:38:20.722639114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:20.728115 containerd[1982]: time="2026-04-17T23:38:20.722793001Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:20.787950 systemd[1]: Started cri-containerd-c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b.scope - libcontainer container c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b. Apr 17 23:38:20.804471 containerd[1982]: time="2026-04-17T23:38:20.803486988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:20.804471 containerd[1982]: time="2026-04-17T23:38:20.803573473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:20.804471 containerd[1982]: time="2026-04-17T23:38:20.803590399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:20.804471 containerd[1982]: time="2026-04-17T23:38:20.803717955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:20.842934 systemd[1]: Started cri-containerd-0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1.scope - libcontainer container 0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1. Apr 17 23:38:20.942253 containerd[1982]: time="2026-04-17T23:38:20.942130705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-glngr,Uid:ab6588a0-8817-4e9c-935c-c360a970fe47,Namespace:kube-system,Attempt:1,} returns sandbox id \"0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1\"" Apr 17 23:38:20.969281 containerd[1982]: time="2026-04-17T23:38:20.969239616Z" level=info msg="CreateContainer within sandbox \"0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:38:21.006090 containerd[1982]: time="2026-04-17T23:38:21.005581716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c9ff59c-q46w4,Uid:4a4f940d-c537-4a63-a007-4209639f0172,Namespace:calico-system,Attempt:1,} returns sandbox id \"84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f\"" Apr 17 23:38:21.011074 containerd[1982]: time="2026-04-17T23:38:21.010811131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 17 
23:38:21.028060 containerd[1982]: time="2026-04-17T23:38:21.026679289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65448f5dbb-ln8lt,Uid:d3c6abe4-044b-4f66-a78f-bda6e0004d81,Namespace:calico-system,Attempt:1,} returns sandbox id \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\"" Apr 17 23:38:21.047483 containerd[1982]: time="2026-04-17T23:38:21.047436415Z" level=info msg="CreateContainer within sandbox \"0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61968990fc524e8851ea96fa8fea31d1ce13d79e002a577273455482d5830a74\"" Apr 17 23:38:21.049292 containerd[1982]: time="2026-04-17T23:38:21.049256643Z" level=info msg="StartContainer for \"61968990fc524e8851ea96fa8fea31d1ce13d79e002a577273455482d5830a74\"" Apr 17 23:38:21.085975 systemd[1]: Started cri-containerd-61968990fc524e8851ea96fa8fea31d1ce13d79e002a577273455482d5830a74.scope - libcontainer container 61968990fc524e8851ea96fa8fea31d1ce13d79e002a577273455482d5830a74. 
Apr 17 23:38:21.125967 containerd[1982]: time="2026-04-17T23:38:21.125921128Z" level=info msg="StartContainer for \"61968990fc524e8851ea96fa8fea31d1ce13d79e002a577273455482d5830a74\" returns successfully" Apr 17 23:38:21.560516 kubelet[3213]: I0417 23:38:21.559909 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-glngr" podStartSLOduration=37.559887014 podStartE2EDuration="37.559887014s" podCreationTimestamp="2026-04-17 23:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:38:21.559348599 +0000 UTC m=+43.731661250" watchObservedRunningTime="2026-04-17 23:38:21.559887014 +0000 UTC m=+43.732199647" Apr 17 23:38:21.998239 systemd-networkd[1902]: calid0b548b7bc8: Gained IPv6LL Apr 17 23:38:22.219880 kernel: calico-node[4981]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 17 23:38:22.248899 systemd-networkd[1902]: cali3c79a7898c9: Gained IPv6LL Apr 17 23:38:22.635638 systemd-networkd[1902]: cali70aefc878c2: Gained IPv6LL Apr 17 23:38:23.544975 systemd-networkd[1902]: vxlan.calico: Link UP Apr 17 23:38:23.544985 systemd-networkd[1902]: vxlan.calico: Gained carrier Apr 17 23:38:23.636208 (udev-worker)[5042]: Network interface NamePolicy= disabled on kernel command line. 
Apr 17 23:38:24.744795 systemd-networkd[1902]: vxlan.calico: Gained IPv6LL Apr 17 23:38:24.767104 containerd[1982]: time="2026-04-17T23:38:24.766102496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 17 23:38:24.775754 containerd[1982]: time="2026-04-17T23:38:24.772234804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.761375094s" Apr 17 23:38:24.775754 containerd[1982]: time="2026-04-17T23:38:24.772290123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 17 23:38:24.792534 containerd[1982]: time="2026-04-17T23:38:24.792028108Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 17 23:38:24.803026 containerd[1982]: time="2026-04-17T23:38:24.799948866Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:24.803026 containerd[1982]: time="2026-04-17T23:38:24.801628587Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:24.803026 containerd[1982]: time="2026-04-17T23:38:24.802999730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:24.937389 containerd[1982]: 
time="2026-04-17T23:38:24.937325676Z" level=info msg="CreateContainer within sandbox \"84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 17 23:38:24.965602 containerd[1982]: time="2026-04-17T23:38:24.965540339Z" level=info msg="CreateContainer within sandbox \"84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"928805af13dae477ab48381d3d48cde0b30dcb395f6a36220baf0fdcf24e5584\"" Apr 17 23:38:24.966391 containerd[1982]: time="2026-04-17T23:38:24.966297329Z" level=info msg="StartContainer for \"928805af13dae477ab48381d3d48cde0b30dcb395f6a36220baf0fdcf24e5584\"" Apr 17 23:38:25.215961 systemd[1]: Started cri-containerd-928805af13dae477ab48381d3d48cde0b30dcb395f6a36220baf0fdcf24e5584.scope - libcontainer container 928805af13dae477ab48381d3d48cde0b30dcb395f6a36220baf0fdcf24e5584. Apr 17 23:38:25.295014 containerd[1982]: time="2026-04-17T23:38:25.294970207Z" level=info msg="StartContainer for \"928805af13dae477ab48381d3d48cde0b30dcb395f6a36220baf0fdcf24e5584\" returns successfully" Apr 17 23:38:25.794241 kubelet[3213]: I0417 23:38:25.761815 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7c9ff59c-q46w4" podStartSLOduration=24.972992671 podStartE2EDuration="28.748211417s" podCreationTimestamp="2026-04-17 23:37:57 +0000 UTC" firstStartedPulling="2026-04-17 23:38:21.010222273 +0000 UTC m=+43.182534891" lastFinishedPulling="2026-04-17 23:38:24.785441004 +0000 UTC m=+46.957753637" observedRunningTime="2026-04-17 23:38:25.687538425 +0000 UTC m=+47.859851057" watchObservedRunningTime="2026-04-17 23:38:25.748211417 +0000 UTC m=+47.920524049" Apr 17 23:38:26.147480 containerd[1982]: time="2026-04-17T23:38:26.147422732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:26.149434 containerd[1982]: time="2026-04-17T23:38:26.148828862Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 17 23:38:26.150001 containerd[1982]: time="2026-04-17T23:38:26.149941890Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:26.153717 containerd[1982]: time="2026-04-17T23:38:26.153647532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:26.154611 containerd[1982]: time="2026-04-17T23:38:26.154408015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.36233063s" Apr 17 23:38:26.154611 containerd[1982]: time="2026-04-17T23:38:26.154449654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 17 23:38:26.160821 containerd[1982]: time="2026-04-17T23:38:26.160774229Z" level=info msg="CreateContainer within sandbox \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:38:26.177019 containerd[1982]: time="2026-04-17T23:38:26.176979043Z" level=info msg="CreateContainer within sandbox \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns 
container id \"9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd\"" Apr 17 23:38:26.178205 containerd[1982]: time="2026-04-17T23:38:26.177982266Z" level=info msg="StartContainer for \"9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd\"" Apr 17 23:38:26.216827 systemd[1]: run-containerd-runc-k8s.io-9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd-runc.mZcyD3.mount: Deactivated successfully. Apr 17 23:38:26.230986 systemd[1]: Started cri-containerd-9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd.scope - libcontainer container 9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd. Apr 17 23:38:26.287259 containerd[1982]: time="2026-04-17T23:38:26.287211724Z" level=info msg="StartContainer for \"9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd\" returns successfully" Apr 17 23:38:26.290329 containerd[1982]: time="2026-04-17T23:38:26.290284005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 17 23:38:27.060244 ntpd[1954]: Listen normally on 7 vxlan.calico 192.168.2.192:123 Apr 17 23:38:27.060979 ntpd[1954]: 17 Apr 23:38:27 ntpd[1954]: Listen normally on 7 vxlan.calico 192.168.2.192:123 Apr 17 23:38:27.060979 ntpd[1954]: 17 Apr 23:38:27 ntpd[1954]: Listen normally on 8 cali3c79a7898c9 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:38:27.060979 ntpd[1954]: 17 Apr 23:38:27 ntpd[1954]: Listen normally on 9 calid0b548b7bc8 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 17 23:38:27.060979 ntpd[1954]: 17 Apr 23:38:27 ntpd[1954]: Listen normally on 10 cali70aefc878c2 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 17 23:38:27.060979 ntpd[1954]: 17 Apr 23:38:27 ntpd[1954]: Listen normally on 11 vxlan.calico [fe80::64b5:7eff:fe14:2956%7]:123 Apr 17 23:38:27.060371 ntpd[1954]: Listen normally on 8 cali3c79a7898c9 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 17 23:38:27.060436 ntpd[1954]: Listen normally on 9 calid0b548b7bc8 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 17 23:38:27.060477 
ntpd[1954]: Listen normally on 10 cali70aefc878c2 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 17 23:38:27.060528 ntpd[1954]: Listen normally on 11 vxlan.calico [fe80::64b5:7eff:fe14:2956%7]:123 Apr 17 23:38:27.889861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4176889498.mount: Deactivated successfully. Apr 17 23:38:27.927181 containerd[1982]: time="2026-04-17T23:38:27.927130428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:27.929277 containerd[1982]: time="2026-04-17T23:38:27.929065043Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 17 23:38:27.932524 containerd[1982]: time="2026-04-17T23:38:27.931576947Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:27.935175 containerd[1982]: time="2026-04-17T23:38:27.935136592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:27.936723 containerd[1982]: time="2026-04-17T23:38:27.936071590Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.645728287s" Apr 17 23:38:27.936723 containerd[1982]: time="2026-04-17T23:38:27.936110850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference 
\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 17 23:38:27.943236 containerd[1982]: time="2026-04-17T23:38:27.943197349Z" level=info msg="CreateContainer within sandbox \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:38:27.969039 containerd[1982]: time="2026-04-17T23:38:27.968993466Z" level=info msg="CreateContainer within sandbox \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3\"" Apr 17 23:38:27.969857 containerd[1982]: time="2026-04-17T23:38:27.969825593Z" level=info msg="StartContainer for \"3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3\"" Apr 17 23:38:28.014136 systemd[1]: Started cri-containerd-3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3.scope - libcontainer container 3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3. 
Apr 17 23:38:28.070926 containerd[1982]: time="2026-04-17T23:38:28.070836647Z" level=info msg="StartContainer for \"3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3\" returns successfully" Apr 17 23:38:28.787743 containerd[1982]: time="2026-04-17T23:38:28.787628589Z" level=info msg="StopContainer for \"3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3\" with timeout 30 (s)" Apr 17 23:38:28.790879 containerd[1982]: time="2026-04-17T23:38:28.790732376Z" level=info msg="Stop container \"3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3\" with signal terminated" Apr 17 23:38:28.792454 containerd[1982]: time="2026-04-17T23:38:28.792148713Z" level=info msg="StopContainer for \"9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd\" with timeout 30 (s)" Apr 17 23:38:28.792881 containerd[1982]: time="2026-04-17T23:38:28.792853816Z" level=info msg="Stop container \"9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd\" with signal terminated" Apr 17 23:38:28.803404 systemd[1]: cri-containerd-3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3.scope: Deactivated successfully. Apr 17 23:38:28.810184 systemd[1]: cri-containerd-9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd.scope: Deactivated successfully. Apr 17 23:38:28.845755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3-rootfs.mount: Deactivated successfully. Apr 17 23:38:28.845882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd-rootfs.mount: Deactivated successfully. 
Apr 17 23:38:29.005225 containerd[1982]: time="2026-04-17T23:38:28.997079107Z" level=info msg="shim disconnected" id=3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3 namespace=k8s.io Apr 17 23:38:29.005843 containerd[1982]: time="2026-04-17T23:38:29.005237010Z" level=warning msg="cleaning up after shim disconnected" id=3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3 namespace=k8s.io Apr 17 23:38:29.005843 containerd[1982]: time="2026-04-17T23:38:28.997170719Z" level=info msg="shim disconnected" id=9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd namespace=k8s.io Apr 17 23:38:29.005843 containerd[1982]: time="2026-04-17T23:38:29.005326078Z" level=warning msg="cleaning up after shim disconnected" id=9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd namespace=k8s.io Apr 17 23:38:29.005843 containerd[1982]: time="2026-04-17T23:38:29.005260447Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:29.006911 containerd[1982]: time="2026-04-17T23:38:29.006731316Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:29.032903 containerd[1982]: time="2026-04-17T23:38:29.031817777Z" level=warning msg="cleanup warnings time=\"2026-04-17T23:38:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 17 23:38:29.038494 containerd[1982]: time="2026-04-17T23:38:29.038381090Z" level=info msg="StopContainer for \"3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3\" returns successfully" Apr 17 23:38:29.057071 containerd[1982]: time="2026-04-17T23:38:29.056875906Z" level=info msg="StopContainer for \"9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd\" returns successfully" Apr 17 23:38:29.061769 containerd[1982]: time="2026-04-17T23:38:29.061717363Z" level=info msg="StopPodSandbox for 
\"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\"" Apr 17 23:38:29.066661 containerd[1982]: time="2026-04-17T23:38:29.066598404Z" level=info msg="Container to stop \"3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:38:29.066661 containerd[1982]: time="2026-04-17T23:38:29.066647327Z" level=info msg="Container to stop \"9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 17 23:38:29.072966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b-shm.mount: Deactivated successfully. Apr 17 23:38:29.079297 systemd[1]: cri-containerd-c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b.scope: Deactivated successfully. Apr 17 23:38:29.110915 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b-rootfs.mount: Deactivated successfully. 
Apr 17 23:38:29.124044 containerd[1982]: time="2026-04-17T23:38:29.122855232Z" level=info msg="shim disconnected" id=c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b namespace=k8s.io Apr 17 23:38:29.124044 containerd[1982]: time="2026-04-17T23:38:29.123607271Z" level=warning msg="cleaning up after shim disconnected" id=c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b namespace=k8s.io Apr 17 23:38:29.124044 containerd[1982]: time="2026-04-17T23:38:29.123624614Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 17 23:38:29.337122 kubelet[3213]: I0417 23:38:29.336101 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-65448f5dbb-ln8lt" podStartSLOduration=22.433143776 podStartE2EDuration="29.336077155s" podCreationTimestamp="2026-04-17 23:38:00 +0000 UTC" firstStartedPulling="2026-04-17 23:38:21.03425553 +0000 UTC m=+43.206568139" lastFinishedPulling="2026-04-17 23:38:27.937188895 +0000 UTC m=+50.109501518" observedRunningTime="2026-04-17 23:38:28.746842832 +0000 UTC m=+50.919155464" watchObservedRunningTime="2026-04-17 23:38:29.336077155 +0000 UTC m=+51.508389786" Apr 17 23:38:29.340797 systemd-networkd[1902]: calid0b548b7bc8: Link DOWN Apr 17 23:38:29.340902 systemd-networkd[1902]: calid0b548b7bc8: Lost carrier Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.335 [INFO][5372] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.338 [INFO][5372] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" iface="eth0" netns="/var/run/netns/cni-987c9c85-6728-3c2f-8651-9f31c7fe75a3" Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.339 [INFO][5372] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" iface="eth0" netns="/var/run/netns/cni-987c9c85-6728-3c2f-8651-9f31c7fe75a3" Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.353 [INFO][5372] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" after=14.508995ms iface="eth0" netns="/var/run/netns/cni-987c9c85-6728-3c2f-8651-9f31c7fe75a3" Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.353 [INFO][5372] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.353 [INFO][5372] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.541 [INFO][5382] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.541 [INFO][5382] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.541 [INFO][5382] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.580 [INFO][5382] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.580 [INFO][5382] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.582 [INFO][5382] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:29.589740 containerd[1982]: 2026-04-17 23:38:29.585 [INFO][5372] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:29.594592 systemd[1]: run-netns-cni\x2d987c9c85\x2d6728\x2d3c2f\x2d8651\x2d9f31c7fe75a3.mount: Deactivated successfully. 
Apr 17 23:38:29.602611 containerd[1982]: time="2026-04-17T23:38:29.602548950Z" level=info msg="TearDown network for sandbox \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\" successfully" Apr 17 23:38:29.602611 containerd[1982]: time="2026-04-17T23:38:29.602599639Z" level=info msg="StopPodSandbox for \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\" returns successfully" Apr 17 23:38:29.603271 containerd[1982]: time="2026-04-17T23:38:29.603216024Z" level=info msg="StopPodSandbox for \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\"" Apr 17 23:38:29.689746 kubelet[3213]: I0417 23:38:29.689671 3213 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.663 [WARNING][5423] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0", GenerateName:"whisker-65448f5dbb-", Namespace:"calico-system", SelfLink:"", UID:"d3c6abe4-044b-4f66-a78f-bda6e0004d81", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65448f5dbb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b", Pod:"whisker-65448f5dbb-ln8lt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid0b548b7bc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.663 [INFO][5423] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.663 [INFO][5423] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" iface="eth0" netns="" Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.663 [INFO][5423] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.663 [INFO][5423] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.696 [INFO][5430] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.696 [INFO][5430] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.696 [INFO][5430] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.702 [WARNING][5430] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.702 [INFO][5430] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.704 [INFO][5430] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:29.708574 containerd[1982]: 2026-04-17 23:38:29.706 [INFO][5423] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:29.708574 containerd[1982]: time="2026-04-17T23:38:29.708505661Z" level=info msg="TearDown network for sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\" successfully" Apr 17 23:38:29.708574 containerd[1982]: time="2026-04-17T23:38:29.708531280Z" level=info msg="StopPodSandbox for \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\" returns successfully" Apr 17 23:38:29.968351 kubelet[3213]: I0417 23:38:29.968301 3213 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79bgv\" (UniqueName: \"kubernetes.io/projected/d3c6abe4-044b-4f66-a78f-bda6e0004d81-kube-api-access-79bgv\") pod \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\" (UID: \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\") " Apr 17 23:38:29.974042 kubelet[3213]: I0417 23:38:29.973615 3213 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d3c6abe4-044b-4f66-a78f-bda6e0004d81-nginx-config\") pod \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\" (UID: \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\") " Apr 17 23:38:29.974042 kubelet[3213]: I0417 23:38:29.973679 3213 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c6abe4-044b-4f66-a78f-bda6e0004d81-whisker-ca-bundle\") pod \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\" (UID: \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\") " Apr 17 23:38:29.974042 kubelet[3213]: I0417 23:38:29.973731 3213 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3c6abe4-044b-4f66-a78f-bda6e0004d81-whisker-backend-key-pair\") pod \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\" (UID: \"d3c6abe4-044b-4f66-a78f-bda6e0004d81\") " Apr 17 23:38:29.988013 kubelet[3213]: I0417 23:38:29.987715 3213 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c6abe4-044b-4f66-a78f-bda6e0004d81-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "d3c6abe4-044b-4f66-a78f-bda6e0004d81" (UID: "d3c6abe4-044b-4f66-a78f-bda6e0004d81"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:38:29.989831 kubelet[3213]: I0417 23:38:29.985650 3213 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3c6abe4-044b-4f66-a78f-bda6e0004d81-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "d3c6abe4-044b-4f66-a78f-bda6e0004d81" (UID: "d3c6abe4-044b-4f66-a78f-bda6e0004d81"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 17 23:38:29.996370 kubelet[3213]: I0417 23:38:29.996326 3213 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c6abe4-044b-4f66-a78f-bda6e0004d81-kube-api-access-79bgv" (OuterVolumeSpecName: "kube-api-access-79bgv") pod "d3c6abe4-044b-4f66-a78f-bda6e0004d81" (UID: "d3c6abe4-044b-4f66-a78f-bda6e0004d81"). InnerVolumeSpecName "kube-api-access-79bgv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 17 23:38:29.998715 kubelet[3213]: I0417 23:38:29.997069 3213 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c6abe4-044b-4f66-a78f-bda6e0004d81-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "d3c6abe4-044b-4f66-a78f-bda6e0004d81" (UID: "d3c6abe4-044b-4f66-a78f-bda6e0004d81"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 17 23:38:30.000875 systemd[1]: var-lib-kubelet-pods-d3c6abe4\x2d044b\x2d4f66\x2da78f\x2dbda6e0004d81-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d79bgv.mount: Deactivated successfully. 
Apr 17 23:38:30.001149 systemd[1]: var-lib-kubelet-pods-d3c6abe4\x2d044b\x2d4f66\x2da78f\x2dbda6e0004d81-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 17 23:38:30.074862 kubelet[3213]: I0417 23:38:30.074815 3213 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d3c6abe4-044b-4f66-a78f-bda6e0004d81-whisker-ca-bundle\") on node \"ip-172-31-16-109\" DevicePath \"\"" Apr 17 23:38:30.074862 kubelet[3213]: I0417 23:38:30.074858 3213 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d3c6abe4-044b-4f66-a78f-bda6e0004d81-whisker-backend-key-pair\") on node \"ip-172-31-16-109\" DevicePath \"\"" Apr 17 23:38:30.074862 kubelet[3213]: I0417 23:38:30.074874 3213 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-79bgv\" (UniqueName: \"kubernetes.io/projected/d3c6abe4-044b-4f66-a78f-bda6e0004d81-kube-api-access-79bgv\") on node \"ip-172-31-16-109\" DevicePath \"\"" Apr 17 23:38:30.074862 kubelet[3213]: I0417 23:38:30.074887 3213 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/d3c6abe4-044b-4f66-a78f-bda6e0004d81-nginx-config\") on node \"ip-172-31-16-109\" DevicePath \"\"" Apr 17 23:38:30.745292 systemd[1]: Removed slice kubepods-besteffort-podd3c6abe4_044b_4f66_a78f_bda6e0004d81.slice - libcontainer container kubepods-besteffort-podd3c6abe4_044b_4f66_a78f_bda6e0004d81.slice. 
Apr 17 23:38:30.989785 containerd[1982]: time="2026-04-17T23:38:30.989101878Z" level=info msg="StopPodSandbox for \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\"" Apr 17 23:38:30.998851 kubelet[3213]: I0417 23:38:30.997891 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9800bd4a-154c-48f6-a218-21cd5db78865-whisker-ca-bundle\") pod \"whisker-557f77f47-pcvhf\" (UID: \"9800bd4a-154c-48f6-a218-21cd5db78865\") " pod="calico-system/whisker-557f77f47-pcvhf" Apr 17 23:38:30.998851 kubelet[3213]: I0417 23:38:30.997957 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9800bd4a-154c-48f6-a218-21cd5db78865-whisker-backend-key-pair\") pod \"whisker-557f77f47-pcvhf\" (UID: \"9800bd4a-154c-48f6-a218-21cd5db78865\") " pod="calico-system/whisker-557f77f47-pcvhf" Apr 17 23:38:30.998851 kubelet[3213]: I0417 23:38:30.998049 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9800bd4a-154c-48f6-a218-21cd5db78865-nginx-config\") pod \"whisker-557f77f47-pcvhf\" (UID: \"9800bd4a-154c-48f6-a218-21cd5db78865\") " pod="calico-system/whisker-557f77f47-pcvhf" Apr 17 23:38:30.998851 kubelet[3213]: I0417 23:38:30.998079 3213 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxh4t\" (UniqueName: \"kubernetes.io/projected/9800bd4a-154c-48f6-a218-21cd5db78865-kube-api-access-rxh4t\") pod \"whisker-557f77f47-pcvhf\" (UID: \"9800bd4a-154c-48f6-a218-21cd5db78865\") " pod="calico-system/whisker-557f77f47-pcvhf" Apr 17 23:38:31.013562 systemd[1]: Created slice kubepods-besteffort-pod9800bd4a_154c_48f6_a218_21cd5db78865.slice - libcontainer container 
kubepods-besteffort-pod9800bd4a_154c_48f6_a218_21cd5db78865.slice. Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.066 [INFO][5449] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.066 [INFO][5449] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" iface="eth0" netns="/var/run/netns/cni-5ad9dcbe-5c89-a511-a06c-3ba62f848c26" Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.067 [INFO][5449] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" iface="eth0" netns="/var/run/netns/cni-5ad9dcbe-5c89-a511-a06c-3ba62f848c26" Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.067 [INFO][5449] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" iface="eth0" netns="/var/run/netns/cni-5ad9dcbe-5c89-a511-a06c-3ba62f848c26" Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.067 [INFO][5449] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.067 [INFO][5449] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.104 [INFO][5457] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" HandleID="k8s-pod-network.8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.104 [INFO][5457] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.104 [INFO][5457] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.125 [WARNING][5457] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" HandleID="k8s-pod-network.8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.125 [INFO][5457] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" HandleID="k8s-pod-network.8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.128 [INFO][5457] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:31.139828 containerd[1982]: 2026-04-17 23:38:31.137 [INFO][5449] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Apr 17 23:38:31.140871 containerd[1982]: time="2026-04-17T23:38:31.140814330Z" level=info msg="TearDown network for sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\" successfully" Apr 17 23:38:31.141005 containerd[1982]: time="2026-04-17T23:38:31.140871836Z" level=info msg="StopPodSandbox for \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\" returns successfully" Apr 17 23:38:31.142027 containerd[1982]: time="2026-04-17T23:38:31.141991262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c8jqp,Uid:1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e,Namespace:kube-system,Attempt:1,}" Apr 17 23:38:31.148003 systemd[1]: run-netns-cni\x2d5ad9dcbe\x2d5c89\x2da511\x2da06c\x2d3ba62f848c26.mount: Deactivated successfully. 
Apr 17 23:38:31.320245 containerd[1982]: time="2026-04-17T23:38:31.320105816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557f77f47-pcvhf,Uid:9800bd4a-154c-48f6-a218-21cd5db78865,Namespace:calico-system,Attempt:0,}" Apr 17 23:38:31.341014 (udev-worker)[5383]: Network interface NamePolicy= disabled on kernel command line. Apr 17 23:38:31.342086 systemd-networkd[1902]: calidcb52c98648: Link UP Apr 17 23:38:31.342569 systemd-networkd[1902]: calidcb52c98648: Gained carrier Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.233 [INFO][5466] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0 coredns-674b8bbfcf- kube-system 1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e 1012 0 2026-04-17 23:37:44 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-109 coredns-674b8bbfcf-c8jqp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidcb52c98648 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-c8jqp" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.233 [INFO][5466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-c8jqp" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.276 [INFO][5477] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" 
HandleID="k8s-pod-network.076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.288 [INFO][5477] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" HandleID="k8s-pod-network.076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef990), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-109", "pod":"coredns-674b8bbfcf-c8jqp", "timestamp":"2026-04-17 23:38:31.27659705 +0000 UTC"}, Hostname:"ip-172-31-16-109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003e51e0)} Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.289 [INFO][5477] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.289 [INFO][5477] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.289 [INFO][5477] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-109' Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.294 [INFO][5477] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" host="ip-172-31-16-109" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.302 [INFO][5477] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-109" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.308 [INFO][5477] ipam/ipam.go 526: Trying affinity for 192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.310 [INFO][5477] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.313 [INFO][5477] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.313 [INFO][5477] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" host="ip-172-31-16-109" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.314 [INFO][5477] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6 Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.320 [INFO][5477] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" host="ip-172-31-16-109" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.331 [INFO][5477] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.196/26] block=192.168.2.192/26 
handle="k8s-pod-network.076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" host="ip-172-31-16-109" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.331 [INFO][5477] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.196/26] handle="k8s-pod-network.076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" host="ip-172-31-16-109" Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.331 [INFO][5477] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:31.373275 containerd[1982]: 2026-04-17 23:38:31.331 [INFO][5477] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.196/26] IPv6=[] ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" HandleID="k8s-pod-network.076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:31.374866 containerd[1982]: 2026-04-17 23:38:31.337 [INFO][5466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-c8jqp" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"", Pod:"coredns-674b8bbfcf-c8jqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcb52c98648", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:31.374866 containerd[1982]: 2026-04-17 23:38:31.337 [INFO][5466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.196/32] ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-c8jqp" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:31.374866 containerd[1982]: 2026-04-17 23:38:31.337 [INFO][5466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidcb52c98648 ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-c8jqp" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:31.374866 containerd[1982]: 2026-04-17 23:38:31.343 [INFO][5466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-c8jqp" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:31.374866 containerd[1982]: 2026-04-17 23:38:31.343 [INFO][5466] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-c8jqp" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6", Pod:"coredns-674b8bbfcf-c8jqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcb52c98648", MAC:"9a:41:fa:b4:8a:8a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:31.374866 containerd[1982]: 2026-04-17 23:38:31.359 [INFO][5466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-c8jqp" WorkloadEndpoint="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:31.444329 containerd[1982]: time="2026-04-17T23:38:31.443966627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:31.444651 containerd[1982]: time="2026-04-17T23:38:31.444547016Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:31.444963 containerd[1982]: time="2026-04-17T23:38:31.444823917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:31.447998 containerd[1982]: time="2026-04-17T23:38:31.446309418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:31.484356 systemd[1]: Started cri-containerd-076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6.scope - libcontainer container 076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6. 
Apr 17 23:38:31.571790 containerd[1982]: time="2026-04-17T23:38:31.570605842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c8jqp,Uid:1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e,Namespace:kube-system,Attempt:1,} returns sandbox id \"076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6\"" Apr 17 23:38:31.579529 containerd[1982]: time="2026-04-17T23:38:31.579469278Z" level=info msg="CreateContainer within sandbox \"076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 17 23:38:31.605384 systemd-networkd[1902]: calid2fde6422ae: Link UP Apr 17 23:38:31.606472 systemd-networkd[1902]: calid2fde6422ae: Gained carrier Apr 17 23:38:31.627834 containerd[1982]: time="2026-04-17T23:38:31.627257156Z" level=info msg="CreateContainer within sandbox \"076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"52d36dcb0a72d51443370a7b8949ed1f7d15cfad3fdd16d3f164739ad0eb4d45\"" Apr 17 23:38:31.629949 containerd[1982]: time="2026-04-17T23:38:31.628678661Z" level=info msg="StartContainer for \"52d36dcb0a72d51443370a7b8949ed1f7d15cfad3fdd16d3f164739ad0eb4d45\"" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.443 [INFO][5485] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0 whisker-557f77f47- calico-system 9800bd4a-154c-48f6-a218-21cd5db78865 1009 0 2026-04-17 23:38:30 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:557f77f47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-16-109 whisker-557f77f47-pcvhf eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid2fde6422ae [] [] }} 
ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Namespace="calico-system" Pod="whisker-557f77f47-pcvhf" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.445 [INFO][5485] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Namespace="calico-system" Pod="whisker-557f77f47-pcvhf" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.529 [INFO][5530] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" HandleID="k8s-pod-network.9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Workload="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.543 [INFO][5530] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" HandleID="k8s-pod-network.9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Workload="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fdaf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-109", "pod":"whisker-557f77f47-pcvhf", "timestamp":"2026-04-17 23:38:31.529404631 +0000 UTC"}, Hostname:"ip-172-31-16-109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000379600)} Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.544 [INFO][5530] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.544 [INFO][5530] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.544 [INFO][5530] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-109' Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.547 [INFO][5530] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" host="ip-172-31-16-109" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.558 [INFO][5530] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-109" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.566 [INFO][5530] ipam/ipam.go 526: Trying affinity for 192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.571 [INFO][5530] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.576 [INFO][5530] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.578 [INFO][5530] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" host="ip-172-31-16-109" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.581 [INFO][5530] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9 Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.592 [INFO][5530] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" host="ip-172-31-16-109" Apr 17 23:38:31.635969 
containerd[1982]: 2026-04-17 23:38:31.599 [INFO][5530] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.197/26] block=192.168.2.192/26 handle="k8s-pod-network.9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" host="ip-172-31-16-109" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.599 [INFO][5530] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.197/26] handle="k8s-pod-network.9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" host="ip-172-31-16-109" Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.599 [INFO][5530] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:31.635969 containerd[1982]: 2026-04-17 23:38:31.599 [INFO][5530] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.197/26] IPv6=[] ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" HandleID="k8s-pod-network.9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Workload="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0" Apr 17 23:38:31.637880 containerd[1982]: 2026-04-17 23:38:31.602 [INFO][5485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Namespace="calico-system" Pod="whisker-557f77f47-pcvhf" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0", GenerateName:"whisker-557f77f47-", Namespace:"calico-system", SelfLink:"", UID:"9800bd4a-154c-48f6-a218-21cd5db78865", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", 
"pod-template-hash":"557f77f47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"", Pod:"whisker-557f77f47-pcvhf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid2fde6422ae", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:31.637880 containerd[1982]: 2026-04-17 23:38:31.602 [INFO][5485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.197/32] ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Namespace="calico-system" Pod="whisker-557f77f47-pcvhf" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0" Apr 17 23:38:31.637880 containerd[1982]: 2026-04-17 23:38:31.602 [INFO][5485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2fde6422ae ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Namespace="calico-system" Pod="whisker-557f77f47-pcvhf" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0" Apr 17 23:38:31.637880 containerd[1982]: 2026-04-17 23:38:31.606 [INFO][5485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Namespace="calico-system" Pod="whisker-557f77f47-pcvhf" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0" Apr 17 23:38:31.637880 containerd[1982]: 2026-04-17 23:38:31.607 [INFO][5485] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Namespace="calico-system" Pod="whisker-557f77f47-pcvhf" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0", GenerateName:"whisker-557f77f47-", Namespace:"calico-system", SelfLink:"", UID:"9800bd4a-154c-48f6-a218-21cd5db78865", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 38, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"557f77f47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9", Pod:"whisker-557f77f47-pcvhf", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.2.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid2fde6422ae", MAC:"e6:c8:3a:86:e9:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:31.637880 containerd[1982]: 2026-04-17 23:38:31.628 [INFO][5485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9" Namespace="calico-system" 
Pod="whisker-557f77f47-pcvhf" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--557f77f47--pcvhf-eth0" Apr 17 23:38:31.684567 systemd[1]: Started cri-containerd-52d36dcb0a72d51443370a7b8949ed1f7d15cfad3fdd16d3f164739ad0eb4d45.scope - libcontainer container 52d36dcb0a72d51443370a7b8949ed1f7d15cfad3fdd16d3f164739ad0eb4d45. Apr 17 23:38:31.709587 containerd[1982]: time="2026-04-17T23:38:31.708216266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:31.709587 containerd[1982]: time="2026-04-17T23:38:31.708297593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:31.709587 containerd[1982]: time="2026-04-17T23:38:31.708319470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:31.709587 containerd[1982]: time="2026-04-17T23:38:31.708436789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:31.761919 systemd[1]: Started cri-containerd-9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9.scope - libcontainer container 9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9. 
Apr 17 23:38:31.773576 containerd[1982]: time="2026-04-17T23:38:31.772879116Z" level=info msg="StartContainer for \"52d36dcb0a72d51443370a7b8949ed1f7d15cfad3fdd16d3f164739ad0eb4d45\" returns successfully" Apr 17 23:38:31.827568 containerd[1982]: time="2026-04-17T23:38:31.826877728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-557f77f47-pcvhf,Uid:9800bd4a-154c-48f6-a218-21cd5db78865,Namespace:calico-system,Attempt:0,} returns sandbox id \"9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9\"" Apr 17 23:38:31.844310 containerd[1982]: time="2026-04-17T23:38:31.844259192Z" level=info msg="CreateContainer within sandbox \"9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 17 23:38:31.868259 containerd[1982]: time="2026-04-17T23:38:31.868072845Z" level=info msg="CreateContainer within sandbox \"9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"67d9ec85fa8d1359959095162872e6fd1871dcb1349f21b73954d01e173b3200\"" Apr 17 23:38:31.871014 containerd[1982]: time="2026-04-17T23:38:31.869744716Z" level=info msg="StartContainer for \"67d9ec85fa8d1359959095162872e6fd1871dcb1349f21b73954d01e173b3200\"" Apr 17 23:38:31.908924 systemd[1]: Started cri-containerd-67d9ec85fa8d1359959095162872e6fd1871dcb1349f21b73954d01e173b3200.scope - libcontainer container 67d9ec85fa8d1359959095162872e6fd1871dcb1349f21b73954d01e173b3200. 
Apr 17 23:38:31.976732 containerd[1982]: time="2026-04-17T23:38:31.975836648Z" level=info msg="StopPodSandbox for \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\"" Apr 17 23:38:31.981597 kubelet[3213]: I0417 23:38:31.981526 3213 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c6abe4-044b-4f66-a78f-bda6e0004d81" path="/var/lib/kubelet/pods/d3c6abe4-044b-4f66-a78f-bda6e0004d81/volumes" Apr 17 23:38:31.986217 containerd[1982]: time="2026-04-17T23:38:31.986024742Z" level=info msg="StartContainer for \"67d9ec85fa8d1359959095162872e6fd1871dcb1349f21b73954d01e173b3200\" returns successfully" Apr 17 23:38:31.998489 containerd[1982]: time="2026-04-17T23:38:31.998427000Z" level=info msg="CreateContainer within sandbox \"9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 17 23:38:32.033195 containerd[1982]: time="2026-04-17T23:38:32.033045311Z" level=info msg="CreateContainer within sandbox \"9a1e58fe20ae8b3b691ffd8bf15694f743fb1f43d7e08b78dbca4d1b0e5747f9\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"08cbced22adab3ea664541656697381c4cf8c119580f50be2a3eb4f0b5900e93\"" Apr 17 23:38:32.034094 containerd[1982]: time="2026-04-17T23:38:32.033636257Z" level=info msg="StartContainer for \"08cbced22adab3ea664541656697381c4cf8c119580f50be2a3eb4f0b5900e93\"" Apr 17 23:38:32.088535 systemd[1]: Started cri-containerd-08cbced22adab3ea664541656697381c4cf8c119580f50be2a3eb4f0b5900e93.scope - libcontainer container 08cbced22adab3ea664541656697381c4cf8c119580f50be2a3eb4f0b5900e93. Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.097 [INFO][5693] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.098 [INFO][5693] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" iface="eth0" netns="/var/run/netns/cni-a4ad2273-3406-483f-6116-c51f40827e9c" Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.101 [INFO][5693] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" iface="eth0" netns="/var/run/netns/cni-a4ad2273-3406-483f-6116-c51f40827e9c" Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.102 [INFO][5693] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" iface="eth0" netns="/var/run/netns/cni-a4ad2273-3406-483f-6116-c51f40827e9c" Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.102 [INFO][5693] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.102 [INFO][5693] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.182 [INFO][5726] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" HandleID="k8s-pod-network.409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.182 [INFO][5726] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.182 [INFO][5726] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.194 [WARNING][5726] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" HandleID="k8s-pod-network.409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.194 [INFO][5726] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" HandleID="k8s-pod-network.409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.200 [INFO][5726] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:32.213839 containerd[1982]: 2026-04-17 23:38:32.204 [INFO][5693] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:32.216360 containerd[1982]: time="2026-04-17T23:38:32.213965643Z" level=info msg="TearDown network for sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\" successfully" Apr 17 23:38:32.216360 containerd[1982]: time="2026-04-17T23:38:32.214137376Z" level=info msg="StopPodSandbox for \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\" returns successfully" Apr 17 23:38:32.220444 containerd[1982]: time="2026-04-17T23:38:32.219106592Z" level=info msg="StartContainer for \"08cbced22adab3ea664541656697381c4cf8c119580f50be2a3eb4f0b5900e93\" returns successfully" Apr 17 23:38:32.220444 containerd[1982]: time="2026-04-17T23:38:32.219481256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cf778744-ml2v7,Uid:1bc2dee4-01d4-4b29-b645-fc728a206f14,Namespace:calico-system,Attempt:1,}" Apr 17 23:38:32.221729 systemd[1]: run-netns-cni\x2da4ad2273\x2d3406\x2d483f\x2d6116\x2dc51f40827e9c.mount: Deactivated successfully. 
Apr 17 23:38:32.386623 systemd-networkd[1902]: califc500cec28f: Link UP Apr 17 23:38:32.387193 systemd-networkd[1902]: califc500cec28f: Gained carrier Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.303 [INFO][5752] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0 calico-apiserver-84cf778744- calico-system 1bc2dee4-01d4-4b29-b645-fc728a206f14 1030 0 2026-04-17 23:37:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84cf778744 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-109 calico-apiserver-84cf778744-ml2v7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] califc500cec28f [] [] }} ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Namespace="calico-system" Pod="calico-apiserver-84cf778744-ml2v7" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.304 [INFO][5752] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Namespace="calico-system" Pod="calico-apiserver-84cf778744-ml2v7" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.338 [INFO][5765] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" HandleID="k8s-pod-network.fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.346 
[INFO][5765] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" HandleID="k8s-pod-network.fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002777c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-109", "pod":"calico-apiserver-84cf778744-ml2v7", "timestamp":"2026-04-17 23:38:32.338851926 +0000 UTC"}, Hostname:"ip-172-31-16-109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002e4dc0)} Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.346 [INFO][5765] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.346 [INFO][5765] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.346 [INFO][5765] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-109' Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.349 [INFO][5765] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" host="ip-172-31-16-109" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.354 [INFO][5765] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-109" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.359 [INFO][5765] ipam/ipam.go 526: Trying affinity for 192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.361 [INFO][5765] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.363 [INFO][5765] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.363 [INFO][5765] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" host="ip-172-31-16-109" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.365 [INFO][5765] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0 Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.369 [INFO][5765] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" host="ip-172-31-16-109" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.379 [INFO][5765] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.198/26] block=192.168.2.192/26 
handle="k8s-pod-network.fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" host="ip-172-31-16-109" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.379 [INFO][5765] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.198/26] handle="k8s-pod-network.fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" host="ip-172-31-16-109" Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.379 [INFO][5765] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:32.411398 containerd[1982]: 2026-04-17 23:38:32.379 [INFO][5765] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.198/26] IPv6=[] ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" HandleID="k8s-pod-network.fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:32.413076 containerd[1982]: 2026-04-17 23:38:32.381 [INFO][5752] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Namespace="calico-system" Pod="calico-apiserver-84cf778744-ml2v7" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0", GenerateName:"calico-apiserver-84cf778744-", Namespace:"calico-system", SelfLink:"", UID:"1bc2dee4-01d4-4b29-b645-fc728a206f14", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cf778744", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"", Pod:"calico-apiserver-84cf778744-ml2v7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califc500cec28f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:32.413076 containerd[1982]: 2026-04-17 23:38:32.382 [INFO][5752] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.198/32] ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Namespace="calico-system" Pod="calico-apiserver-84cf778744-ml2v7" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:32.413076 containerd[1982]: 2026-04-17 23:38:32.382 [INFO][5752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc500cec28f ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Namespace="calico-system" Pod="calico-apiserver-84cf778744-ml2v7" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:32.413076 containerd[1982]: 2026-04-17 23:38:32.384 [INFO][5752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Namespace="calico-system" Pod="calico-apiserver-84cf778744-ml2v7" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:32.413076 containerd[1982]: 2026-04-17 23:38:32.384 [INFO][5752] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Namespace="calico-system" Pod="calico-apiserver-84cf778744-ml2v7" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0", GenerateName:"calico-apiserver-84cf778744-", Namespace:"calico-system", SelfLink:"", UID:"1bc2dee4-01d4-4b29-b645-fc728a206f14", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cf778744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0", Pod:"calico-apiserver-84cf778744-ml2v7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califc500cec28f", MAC:"56:9d:37:90:13:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:32.413076 containerd[1982]: 2026-04-17 23:38:32.399 [INFO][5752] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0" Namespace="calico-system" Pod="calico-apiserver-84cf778744-ml2v7" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:32.455415 containerd[1982]: time="2026-04-17T23:38:32.455046291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:32.456839 containerd[1982]: time="2026-04-17T23:38:32.455688431Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:32.456839 containerd[1982]: time="2026-04-17T23:38:32.455770998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:32.456839 containerd[1982]: time="2026-04-17T23:38:32.455879410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:32.488007 systemd[1]: Started cri-containerd-fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0.scope - libcontainer container fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0. 
Apr 17 23:38:32.619204 containerd[1982]: time="2026-04-17T23:38:32.619152808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cf778744-ml2v7,Uid:1bc2dee4-01d4-4b29-b645-fc728a206f14,Namespace:calico-system,Attempt:1,} returns sandbox id \"fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0\"" Apr 17 23:38:32.632110 containerd[1982]: time="2026-04-17T23:38:32.631907896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:38:32.798958 kubelet[3213]: I0417 23:38:32.798194 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c8jqp" podStartSLOduration=48.797154795 podStartE2EDuration="48.797154795s" podCreationTimestamp="2026-04-17 23:37:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:38:32.749544517 +0000 UTC m=+54.921857150" watchObservedRunningTime="2026-04-17 23:38:32.797154795 +0000 UTC m=+54.969467430" Apr 17 23:38:32.815858 kubelet[3213]: I0417 23:38:32.815456 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-557f77f47-pcvhf" podStartSLOduration=2.815433485 podStartE2EDuration="2.815433485s" podCreationTimestamp="2026-04-17 23:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-17 23:38:32.796528228 +0000 UTC m=+54.968840860" watchObservedRunningTime="2026-04-17 23:38:32.815433485 +0000 UTC m=+54.987746117" Apr 17 23:38:32.981095 containerd[1982]: time="2026-04-17T23:38:32.980684469Z" level=info msg="StopPodSandbox for \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\"" Apr 17 23:38:32.981095 containerd[1982]: time="2026-04-17T23:38:32.980796076Z" level=info msg="StopPodSandbox for \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\"" Apr 17 
23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.076 [INFO][5857] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.078 [INFO][5857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" iface="eth0" netns="/var/run/netns/cni-f75f2c7f-2f58-0097-15dc-6c0fdb8d83ec" Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.080 [INFO][5857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" iface="eth0" netns="/var/run/netns/cni-f75f2c7f-2f58-0097-15dc-6c0fdb8d83ec" Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.084 [INFO][5857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" iface="eth0" netns="/var/run/netns/cni-f75f2c7f-2f58-0097-15dc-6c0fdb8d83ec" Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.084 [INFO][5857] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.084 [INFO][5857] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.149 [INFO][5874] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" HandleID="k8s-pod-network.06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0" Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.150 [INFO][5874] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.150 [INFO][5874] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.160 [WARNING][5874] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" HandleID="k8s-pod-network.06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0"
Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.161 [INFO][5874] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" HandleID="k8s-pod-network.06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0"
Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.165 [INFO][5874] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:33.177675 containerd[1982]: 2026-04-17 23:38:33.173 [INFO][5857] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb"
Apr 17 23:38:33.178637 containerd[1982]: time="2026-04-17T23:38:33.177799194Z" level=info msg="TearDown network for sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\" successfully"
Apr 17 23:38:33.178637 containerd[1982]: time="2026-04-17T23:38:33.177838444Z" level=info msg="StopPodSandbox for \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\" returns successfully"
Apr 17 23:38:33.185134 containerd[1982]: time="2026-04-17T23:38:33.183420747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hzpfm,Uid:ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac,Namespace:calico-system,Attempt:1,}"
Apr 17 23:38:33.184314 systemd[1]: run-netns-cni\x2df75f2c7f\x2d2f58\x2d0097\x2d15dc\x2d6c0fdb8d83ec.mount: Deactivated successfully.
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.076 [INFO][5856] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11"
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.076 [INFO][5856] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" iface="eth0" netns="/var/run/netns/cni-5d9bf335-5776-8fd2-63e4-5a0e4456ddb1"
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.079 [INFO][5856] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" iface="eth0" netns="/var/run/netns/cni-5d9bf335-5776-8fd2-63e4-5a0e4456ddb1"
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.082 [INFO][5856] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" iface="eth0" netns="/var/run/netns/cni-5d9bf335-5776-8fd2-63e4-5a0e4456ddb1"
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.082 [INFO][5856] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11"
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.083 [INFO][5856] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11"
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.165 [INFO][5873] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" HandleID="k8s-pod-network.e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0"
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.165 [INFO][5873] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.165 [INFO][5873] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.176 [WARNING][5873] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" HandleID="k8s-pod-network.e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0"
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.176 [INFO][5873] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" HandleID="k8s-pod-network.e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0"
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.180 [INFO][5873] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:33.188501 containerd[1982]: 2026-04-17 23:38:33.186 [INFO][5856] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11"
Apr 17 23:38:33.191437 containerd[1982]: time="2026-04-17T23:38:33.191348366Z" level=info msg="TearDown network for sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\" successfully"
Apr 17 23:38:33.191437 containerd[1982]: time="2026-04-17T23:38:33.191384640Z" level=info msg="StopPodSandbox for \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\" returns successfully"
Apr 17 23:38:33.197171 systemd[1]: run-netns-cni\x2d5d9bf335\x2d5776\x2d8fd2\x2d63e4\x2d5a0e4456ddb1.mount: Deactivated successfully.
Apr 17 23:38:33.212930 containerd[1982]: time="2026-04-17T23:38:33.212845843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v54pl,Uid:36a96baf-e8bf-431a-9ddf-e9ecc28a4802,Namespace:calico-system,Attempt:1,}"
Apr 17 23:38:33.320896 systemd-networkd[1902]: calidcb52c98648: Gained IPv6LL
Apr 17 23:38:33.512101 systemd-networkd[1902]: calicb5486a39c1: Link UP
Apr 17 23:38:33.514897 systemd-networkd[1902]: calicb5486a39c1: Gained carrier
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.385 [INFO][5895] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0 csi-node-driver- calico-system 36a96baf-e8bf-431a-9ddf-e9ecc28a4802 1053 0 2026-04-17 23:37:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-16-109 csi-node-driver-v54pl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicb5486a39c1 [] [] }} ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Namespace="calico-system" Pod="csi-node-driver-v54pl" WorkloadEndpoint="ip--172--31--16--109-k8s-csi--node--driver--v54pl-"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.386 [INFO][5895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Namespace="calico-system" Pod="csi-node-driver-v54pl" WorkloadEndpoint="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.449 [INFO][5914] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" HandleID="k8s-pod-network.f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.462 [INFO][5914] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" HandleID="k8s-pod-network.f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380140), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-109", "pod":"csi-node-driver-v54pl", "timestamp":"2026-04-17 23:38:33.449338923 +0000 UTC"}, Hostname:"ip-172-31-16-109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00029a000)}
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.462 [INFO][5914] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.462 [INFO][5914] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.462 [INFO][5914] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-109'
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.465 [INFO][5914] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" host="ip-172-31-16-109"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.472 [INFO][5914] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-109"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.478 [INFO][5914] ipam/ipam.go 526: Trying affinity for 192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.481 [INFO][5914] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.483 [INFO][5914] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.483 [INFO][5914] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" host="ip-172-31-16-109"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.485 [INFO][5914] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.489 [INFO][5914] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" host="ip-172-31-16-109"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.503 [INFO][5914] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.199/26] block=192.168.2.192/26 handle="k8s-pod-network.f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" host="ip-172-31-16-109"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.503 [INFO][5914] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.199/26] handle="k8s-pod-network.f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" host="ip-172-31-16-109"
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.503 [INFO][5914] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:33.540724 containerd[1982]: 2026-04-17 23:38:33.503 [INFO][5914] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.199/26] IPv6=[] ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" HandleID="k8s-pod-network.f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0"
Apr 17 23:38:33.550033 containerd[1982]: 2026-04-17 23:38:33.506 [INFO][5895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Namespace="calico-system" Pod="csi-node-driver-v54pl" WorkloadEndpoint="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36a96baf-e8bf-431a-9ddf-e9ecc28a4802", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"", Pod:"csi-node-driver-v54pl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb5486a39c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:33.550033 containerd[1982]: 2026-04-17 23:38:33.506 [INFO][5895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.199/32] ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Namespace="calico-system" Pod="csi-node-driver-v54pl" WorkloadEndpoint="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0"
Apr 17 23:38:33.550033 containerd[1982]: 2026-04-17 23:38:33.506 [INFO][5895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb5486a39c1 ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Namespace="calico-system" Pod="csi-node-driver-v54pl" WorkloadEndpoint="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0"
Apr 17 23:38:33.550033 containerd[1982]: 2026-04-17 23:38:33.509 [INFO][5895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Namespace="calico-system" Pod="csi-node-driver-v54pl" WorkloadEndpoint="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0"
Apr 17 23:38:33.550033 containerd[1982]: 2026-04-17 23:38:33.509 [INFO][5895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Namespace="calico-system" Pod="csi-node-driver-v54pl" WorkloadEndpoint="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36a96baf-e8bf-431a-9ddf-e9ecc28a4802", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b", Pod:"csi-node-driver-v54pl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb5486a39c1", MAC:"06:ec:1e:ba:c6:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:33.550033 containerd[1982]: 2026-04-17 23:38:33.531 [INFO][5895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b" Namespace="calico-system" Pod="csi-node-driver-v54pl" WorkloadEndpoint="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0"
Apr 17 23:38:33.641783 systemd-networkd[1902]: calid2fde6422ae: Gained IPv6LL
Apr 17 23:38:33.657819 containerd[1982]: time="2026-04-17T23:38:33.657312158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:38:33.657819 containerd[1982]: time="2026-04-17T23:38:33.657450752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:38:33.657819 containerd[1982]: time="2026-04-17T23:38:33.657473576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:38:33.658161 containerd[1982]: time="2026-04-17T23:38:33.657876195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:38:33.680628 systemd-networkd[1902]: calica1f9d38416: Link UP
Apr 17 23:38:33.682740 systemd-networkd[1902]: calica1f9d38416: Gained carrier
Apr 17 23:38:33.720940 systemd[1]: Started cri-containerd-f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b.scope - libcontainer container f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b.
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.399 [INFO][5886] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0 goldmane-5b85766d88- calico-system ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac 1052 0 2026-04-17 23:37:56 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-16-109 goldmane-5b85766d88-hzpfm eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calica1f9d38416 [] [] }} ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Namespace="calico-system" Pod="goldmane-5b85766d88-hzpfm" WorkloadEndpoint="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.399 [INFO][5886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Namespace="calico-system" Pod="goldmane-5b85766d88-hzpfm" WorkloadEndpoint="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.468 [INFO][5919] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" HandleID="k8s-pod-network.973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.477 [INFO][5919] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" HandleID="k8s-pod-network.973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e180), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-109", "pod":"goldmane-5b85766d88-hzpfm", "timestamp":"2026-04-17 23:38:33.468295858 +0000 UTC"}, Hostname:"ip-172-31-16-109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004b6000)}
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.477 [INFO][5919] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.503 [INFO][5919] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.503 [INFO][5919] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-109'
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.574 [INFO][5919] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" host="ip-172-31-16-109"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.585 [INFO][5919] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-109"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.598 [INFO][5919] ipam/ipam.go 526: Trying affinity for 192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.608 [INFO][5919] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.616 [INFO][5919] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ip-172-31-16-109"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.617 [INFO][5919] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" host="ip-172-31-16-109"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.626 [INFO][5919] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.640 [INFO][5919] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" host="ip-172-31-16-109"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.666 [INFO][5919] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.200/26] block=192.168.2.192/26 handle="k8s-pod-network.973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" host="ip-172-31-16-109"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.666 [INFO][5919] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.200/26] handle="k8s-pod-network.973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" host="ip-172-31-16-109"
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.666 [INFO][5919] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:33.723405 containerd[1982]: 2026-04-17 23:38:33.666 [INFO][5919] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.200/26] IPv6=[] ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" HandleID="k8s-pod-network.973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0"
Apr 17 23:38:33.725434 containerd[1982]: 2026-04-17 23:38:33.673 [INFO][5886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Namespace="calico-system" Pod="goldmane-5b85766d88-hzpfm" WorkloadEndpoint="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"", Pod:"goldmane-5b85766d88-hzpfm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica1f9d38416", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:33.725434 containerd[1982]: 2026-04-17 23:38:33.673 [INFO][5886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.200/32] ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Namespace="calico-system" Pod="goldmane-5b85766d88-hzpfm" WorkloadEndpoint="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0"
Apr 17 23:38:33.725434 containerd[1982]: 2026-04-17 23:38:33.673 [INFO][5886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica1f9d38416 ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Namespace="calico-system" Pod="goldmane-5b85766d88-hzpfm" WorkloadEndpoint="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0"
Apr 17 23:38:33.725434 containerd[1982]: 2026-04-17 23:38:33.683 [INFO][5886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Namespace="calico-system" Pod="goldmane-5b85766d88-hzpfm" WorkloadEndpoint="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0"
Apr 17 23:38:33.725434 containerd[1982]: 2026-04-17 23:38:33.686 [INFO][5886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Namespace="calico-system" Pod="goldmane-5b85766d88-hzpfm" WorkloadEndpoint="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc", Pod:"goldmane-5b85766d88-hzpfm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica1f9d38416", MAC:"16:aa:80:af:86:38", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:33.725434 containerd[1982]: 2026-04-17 23:38:33.712 [INFO][5886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc" Namespace="calico-system" Pod="goldmane-5b85766d88-hzpfm" WorkloadEndpoint="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0"
Apr 17 23:38:33.768311 systemd-networkd[1902]: califc500cec28f: Gained IPv6LL
Apr 17 23:38:33.781723 containerd[1982]: time="2026-04-17T23:38:33.780577241Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 17 23:38:33.783886 containerd[1982]: time="2026-04-17T23:38:33.783536055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 17 23:38:33.789284 containerd[1982]: time="2026-04-17T23:38:33.785817683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:38:33.789284 containerd[1982]: time="2026-04-17T23:38:33.785948665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 17 23:38:33.838450 systemd[1]: Started cri-containerd-973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc.scope - libcontainer container 973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc.
Apr 17 23:38:33.850216 containerd[1982]: time="2026-04-17T23:38:33.850174782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v54pl,Uid:36a96baf-e8bf-431a-9ddf-e9ecc28a4802,Namespace:calico-system,Attempt:1,} returns sandbox id \"f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b\""
Apr 17 23:38:33.970545 containerd[1982]: time="2026-04-17T23:38:33.970477761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hzpfm,Uid:ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac,Namespace:calico-system,Attempt:1,} returns sandbox id \"973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc\""
Apr 17 23:38:34.922162 systemd-networkd[1902]: calicb5486a39c1: Gained IPv6LL
Apr 17 23:38:35.002396 containerd[1982]: time="2026-04-17T23:38:35.002188206Z" level=info msg="StopPodSandbox for \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\""
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.096 [INFO][6080] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58"
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.097 [INFO][6080] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" iface="eth0" netns="/var/run/netns/cni-c265c6da-e689-436f-aad5-4d1019d15cfe"
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.097 [INFO][6080] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" iface="eth0" netns="/var/run/netns/cni-c265c6da-e689-436f-aad5-4d1019d15cfe"
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.098 [INFO][6080] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" iface="eth0" netns="/var/run/netns/cni-c265c6da-e689-436f-aad5-4d1019d15cfe"
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.098 [INFO][6080] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58"
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.098 [INFO][6080] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58"
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.186 [INFO][6087] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" HandleID="k8s-pod-network.cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0"
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.189 [INFO][6087] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.190 [INFO][6087] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.210 [WARNING][6087] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" HandleID="k8s-pod-network.cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0"
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.210 [INFO][6087] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" HandleID="k8s-pod-network.cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0"
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.212 [INFO][6087] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:35.219301 containerd[1982]: 2026-04-17 23:38:35.216 [INFO][6080] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58"
Apr 17 23:38:35.222371 containerd[1982]: time="2026-04-17T23:38:35.219808441Z" level=info msg="TearDown network for sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\" successfully"
Apr 17 23:38:35.222371 containerd[1982]: time="2026-04-17T23:38:35.219843140Z" level=info msg="StopPodSandbox for \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\" returns successfully"
Apr 17 23:38:35.224646 systemd[1]: run-netns-cni\x2dc265c6da\x2de689\x2d436f\x2daad5\x2d4d1019d15cfe.mount: Deactivated successfully.
Apr 17 23:38:35.229919 containerd[1982]: time="2026-04-17T23:38:35.229883909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cf778744-tr4mm,Uid:cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b,Namespace:calico-system,Attempt:1,}"
Apr 17 23:38:35.240273 systemd-networkd[1902]: calica1f9d38416: Gained IPv6LL
Apr 17 23:38:35.513988 systemd-networkd[1902]: cali904775ccd0c: Link UP
Apr 17 23:38:35.518450 systemd-networkd[1902]: cali904775ccd0c: Gained carrier
Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.364 [INFO][6097] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0 calico-apiserver-84cf778744- calico-system cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b 1069 0 2026-04-17 23:37:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84cf778744 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-109 calico-apiserver-84cf778744-tr4mm eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali904775ccd0c [] [] }} ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Namespace="calico-system" Pod="calico-apiserver-84cf778744-tr4mm" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-"
Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.364 [INFO][6097] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Namespace="calico-system" Pod="calico-apiserver-84cf778744-tr4mm" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0"
Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.432 [INFO][6111] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" HandleID="k8s-pod-network.44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0"
Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.446 [INFO][6111] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" HandleID="k8s-pod-network.44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fbe80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-109", "pod":"calico-apiserver-84cf778744-tr4mm", "timestamp":"2026-04-17 23:38:35.432801096 +0000 UTC"}, Hostname:"ip-172-31-16-109", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003c1a20)}
Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.446 [INFO][6111] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.446 [INFO][6111] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.446 [INFO][6111] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-109' Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.451 [INFO][6111] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" host="ip-172-31-16-109" Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.459 [INFO][6111] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-16-109" Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.466 [INFO][6111] ipam/ipam.go 526: Trying affinity for 192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.469 [INFO][6111] ipam/ipam.go 160: Attempting to load block cidr=192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.472 [INFO][6111] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.2.192/26 host="ip-172-31-16-109" Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.472 [INFO][6111] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.2.192/26 handle="k8s-pod-network.44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" host="ip-172-31-16-109" Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.475 [INFO][6111] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2 Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.485 [INFO][6111] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.2.192/26 handle="k8s-pod-network.44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" host="ip-172-31-16-109" Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.498 [INFO][6111] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.2.201/26] block=192.168.2.192/26 
handle="k8s-pod-network.44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" host="ip-172-31-16-109" Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.498 [INFO][6111] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.2.201/26] handle="k8s-pod-network.44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" host="ip-172-31-16-109" Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.498 [INFO][6111] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:35.569382 containerd[1982]: 2026-04-17 23:38:35.498 [INFO][6111] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.2.201/26] IPv6=[] ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" HandleID="k8s-pod-network.44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" Apr 17 23:38:35.572545 containerd[1982]: 2026-04-17 23:38:35.505 [INFO][6097] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Namespace="calico-system" Pod="calico-apiserver-84cf778744-tr4mm" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0", GenerateName:"calico-apiserver-84cf778744-", Namespace:"calico-system", SelfLink:"", UID:"cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cf778744", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"", Pod:"calico-apiserver-84cf778744-tr4mm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali904775ccd0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:35.572545 containerd[1982]: 2026-04-17 23:38:35.505 [INFO][6097] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.2.201/32] ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Namespace="calico-system" Pod="calico-apiserver-84cf778744-tr4mm" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" Apr 17 23:38:35.572545 containerd[1982]: 2026-04-17 23:38:35.505 [INFO][6097] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali904775ccd0c ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Namespace="calico-system" Pod="calico-apiserver-84cf778744-tr4mm" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" Apr 17 23:38:35.572545 containerd[1982]: 2026-04-17 23:38:35.517 [INFO][6097] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Namespace="calico-system" Pod="calico-apiserver-84cf778744-tr4mm" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" Apr 17 23:38:35.572545 containerd[1982]: 2026-04-17 23:38:35.519 [INFO][6097] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Namespace="calico-system" Pod="calico-apiserver-84cf778744-tr4mm" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0", GenerateName:"calico-apiserver-84cf778744-", Namespace:"calico-system", SelfLink:"", UID:"cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cf778744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2", Pod:"calico-apiserver-84cf778744-tr4mm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali904775ccd0c", MAC:"ee:c8:6d:b0:b3:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:35.572545 containerd[1982]: 2026-04-17 23:38:35.559 [INFO][6097] cni-plugin/k8s.go 532: 
Wrote updated endpoint to datastore ContainerID="44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2" Namespace="calico-system" Pod="calico-apiserver-84cf778744-tr4mm" WorkloadEndpoint="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" Apr 17 23:38:35.690132 containerd[1982]: time="2026-04-17T23:38:35.689839801Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 17 23:38:35.692391 containerd[1982]: time="2026-04-17T23:38:35.689922751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 17 23:38:35.692391 containerd[1982]: time="2026-04-17T23:38:35.690964745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:35.692391 containerd[1982]: time="2026-04-17T23:38:35.691113944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 17 23:38:35.765057 systemd[1]: Started cri-containerd-44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2.scope - libcontainer container 44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2. Apr 17 23:38:35.853085 containerd[1982]: time="2026-04-17T23:38:35.852390194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84cf778744-tr4mm,Uid:cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b,Namespace:calico-system,Attempt:1,} returns sandbox id \"44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2\"" Apr 17 23:38:36.224546 systemd[1]: run-containerd-runc-k8s.io-44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2-runc.f0XRWV.mount: Deactivated successfully. 
Apr 17 23:38:36.568781 containerd[1982]: time="2026-04-17T23:38:36.568617123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 17 23:38:36.609643 containerd[1982]: time="2026-04-17T23:38:36.608374385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:36.611390 containerd[1982]: time="2026-04-17T23:38:36.609948448Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.977994202s" Apr 17 23:38:36.611390 containerd[1982]: time="2026-04-17T23:38:36.610003836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:38:36.611390 containerd[1982]: time="2026-04-17T23:38:36.610586904Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:36.612003 containerd[1982]: time="2026-04-17T23:38:36.611967175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:36.626238 containerd[1982]: time="2026-04-17T23:38:36.626187031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 17 23:38:36.705114 containerd[1982]: time="2026-04-17T23:38:36.705027681Z" level=info msg="CreateContainer within sandbox 
\"fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:38:36.722476 containerd[1982]: time="2026-04-17T23:38:36.722431365Z" level=info msg="CreateContainer within sandbox \"fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d5f4fde03c2bdbdfcf6d36c4f81a6b1b5b8b2d9300597f2936d4a1f132a6231d\"" Apr 17 23:38:36.735380 containerd[1982]: time="2026-04-17T23:38:36.735325125Z" level=info msg="StartContainer for \"d5f4fde03c2bdbdfcf6d36c4f81a6b1b5b8b2d9300597f2936d4a1f132a6231d\"" Apr 17 23:38:36.784006 systemd[1]: run-containerd-runc-k8s.io-d5f4fde03c2bdbdfcf6d36c4f81a6b1b5b8b2d9300597f2936d4a1f132a6231d-runc.07Lv9O.mount: Deactivated successfully. Apr 17 23:38:36.793911 systemd[1]: Started cri-containerd-d5f4fde03c2bdbdfcf6d36c4f81a6b1b5b8b2d9300597f2936d4a1f132a6231d.scope - libcontainer container d5f4fde03c2bdbdfcf6d36c4f81a6b1b5b8b2d9300597f2936d4a1f132a6231d. 
Apr 17 23:38:36.853714 containerd[1982]: time="2026-04-17T23:38:36.852057401Z" level=info msg="StartContainer for \"d5f4fde03c2bdbdfcf6d36c4f81a6b1b5b8b2d9300597f2936d4a1f132a6231d\" returns successfully" Apr 17 23:38:37.096076 kubelet[3213]: I0417 23:38:37.095997 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-84cf778744-ml2v7" podStartSLOduration=37.015043024 podStartE2EDuration="41.020228235s" podCreationTimestamp="2026-04-17 23:37:56 +0000 UTC" firstStartedPulling="2026-04-17 23:38:32.620838988 +0000 UTC m=+54.793151608" lastFinishedPulling="2026-04-17 23:38:36.626024206 +0000 UTC m=+58.798336819" observedRunningTime="2026-04-17 23:38:37.01833579 +0000 UTC m=+59.190648422" watchObservedRunningTime="2026-04-17 23:38:37.020228235 +0000 UTC m=+59.192540862" Apr 17 23:38:37.416186 systemd-networkd[1902]: cali904775ccd0c: Gained IPv6LL Apr 17 23:38:38.244725 systemd[1]: Started sshd@7-172.31.16.109:22-20.229.252.112:40674.service - OpenSSH per-connection server daemon (20.229.252.112:40674). 
Apr 17 23:38:38.432988 containerd[1982]: time="2026-04-17T23:38:38.432816003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:38.437178 containerd[1982]: time="2026-04-17T23:38:38.435209954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 17 23:38:38.437728 containerd[1982]: time="2026-04-17T23:38:38.437367818Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:38.445096 containerd[1982]: time="2026-04-17T23:38:38.444232443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:38.452431 containerd[1982]: time="2026-04-17T23:38:38.452345856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.826003622s" Apr 17 23:38:38.454561 containerd[1982]: time="2026-04-17T23:38:38.454298822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 17 23:38:38.556383 kubelet[3213]: I0417 23:38:38.555858 3213 scope.go:117] "RemoveContainer" containerID="9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd" Apr 17 23:38:38.571152 kubelet[3213]: I0417 23:38:38.571113 3213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:38:38.613371 containerd[1982]: 
time="2026-04-17T23:38:38.613183590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 17 23:38:38.697828 containerd[1982]: time="2026-04-17T23:38:38.697233504Z" level=info msg="CreateContainer within sandbox \"f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 17 23:38:38.737728 containerd[1982]: time="2026-04-17T23:38:38.736780966Z" level=info msg="CreateContainer within sandbox \"f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f816a58692ef25de1f09364c88ed2d68d06d359539d48921b0c7342a20fca2ba\"" Apr 17 23:38:38.796405 containerd[1982]: time="2026-04-17T23:38:38.793961338Z" level=info msg="StartContainer for \"f816a58692ef25de1f09364c88ed2d68d06d359539d48921b0c7342a20fca2ba\"" Apr 17 23:38:38.904985 systemd[1]: Started cri-containerd-f816a58692ef25de1f09364c88ed2d68d06d359539d48921b0c7342a20fca2ba.scope - libcontainer container f816a58692ef25de1f09364c88ed2d68d06d359539d48921b0c7342a20fca2ba. 
Apr 17 23:38:38.935729 containerd[1982]: time="2026-04-17T23:38:38.935606864Z" level=info msg="RemoveContainer for \"9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd\"" Apr 17 23:38:39.063886 containerd[1982]: time="2026-04-17T23:38:39.063770722Z" level=info msg="StartContainer for \"f816a58692ef25de1f09364c88ed2d68d06d359539d48921b0c7342a20fca2ba\" returns successfully" Apr 17 23:38:39.082747 containerd[1982]: time="2026-04-17T23:38:39.081928638Z" level=info msg="RemoveContainer for \"9b571fcace6a47049e75e0b55e65d93ddccf285c6e20edea8914d24a45b81fbd\" returns successfully" Apr 17 23:38:39.088040 kubelet[3213]: E0417 23:38:39.066649 3213 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36a96baf_e8bf_431a_9ddf_e9ecc28a4802.slice/cri-containerd-f816a58692ef25de1f09364c88ed2d68d06d359539d48921b0c7342a20fca2ba.scope\": RecentStats: unable to find data in memory cache]" Apr 17 23:38:39.111641 kubelet[3213]: I0417 23:38:39.111004 3213 scope.go:117] "RemoveContainer" containerID="3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3" Apr 17 23:38:39.113951 containerd[1982]: time="2026-04-17T23:38:39.113914304Z" level=info msg="RemoveContainer for \"3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3\"" Apr 17 23:38:39.122765 containerd[1982]: time="2026-04-17T23:38:39.122717888Z" level=info msg="RemoveContainer for \"3a967770374769b64bd24b7e8e0f42fc5c31604dfde90eaebcc401b37ab234b3\" returns successfully" Apr 17 23:38:39.124593 containerd[1982]: time="2026-04-17T23:38:39.124364367Z" level=info msg="StopPodSandbox for \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\"" Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.188 [WARNING][6290] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.188 [INFO][6290] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.188 [INFO][6290] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" iface="eth0" netns="" Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.188 [INFO][6290] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.188 [INFO][6290] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.317 [INFO][6298] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.321 [INFO][6298] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.322 [INFO][6298] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.335 [WARNING][6298] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.336 [INFO][6298] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.337 [INFO][6298] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:39.342904 containerd[1982]: 2026-04-17 23:38:39.340 [INFO][6290] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:39.342904 containerd[1982]: time="2026-04-17T23:38:39.342793358Z" level=info msg="TearDown network for sandbox \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\" successfully" Apr 17 23:38:39.342904 containerd[1982]: time="2026-04-17T23:38:39.342823736Z" level=info msg="StopPodSandbox for \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\" returns successfully" Apr 17 23:38:39.357499 containerd[1982]: time="2026-04-17T23:38:39.357437855Z" level=info msg="RemovePodSandbox for \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\"" Apr 17 23:38:39.359541 containerd[1982]: time="2026-04-17T23:38:39.359496492Z" level=info msg="Forcibly stopping sandbox \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\"" Apr 17 23:38:39.473106 sshd[6242]: Accepted publickey for core from 20.229.252.112 port 40674 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:38:39.488378 sshd[6242]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Apr 17 23:38:39.519761 systemd-logind[1959]: New session 8 of user core. Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.406 [WARNING][6312] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.406 [INFO][6312] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.406 [INFO][6312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" iface="eth0" netns="" Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.406 [INFO][6312] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.406 [INFO][6312] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.457 [INFO][6319] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.459 [INFO][6319] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.459 [INFO][6319] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.482 [WARNING][6319] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.482 [INFO][6319] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" HandleID="k8s-pod-network.c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.488 [INFO][6319] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:39.526990 containerd[1982]: 2026-04-17 23:38:39.515 [INFO][6312] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b" Apr 17 23:38:39.531251 containerd[1982]: time="2026-04-17T23:38:39.527020733Z" level=info msg="TearDown network for sandbox \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\" successfully" Apr 17 23:38:39.527904 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 17 23:38:39.565858 containerd[1982]: time="2026-04-17T23:38:39.565326597Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:38:39.565858 containerd[1982]: time="2026-04-17T23:38:39.565477340Z" level=info msg="RemovePodSandbox \"c0602ced4c2286efe32320ca78d4791fd9966906a9e4e054e862653c4745bd4b\" returns successfully" Apr 17 23:38:39.576335 containerd[1982]: time="2026-04-17T23:38:39.576114164Z" level=info msg="StopPodSandbox for \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\"" Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.639 [WARNING][6335] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.639 [INFO][6335] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.639 [INFO][6335] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" iface="eth0" netns="" Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.639 [INFO][6335] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.639 [INFO][6335] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.681 [INFO][6342] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.681 [INFO][6342] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.681 [INFO][6342] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.690 [WARNING][6342] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.690 [INFO][6342] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.693 [INFO][6342] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:39.717566 containerd[1982]: 2026-04-17 23:38:39.696 [INFO][6335] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:39.719114 containerd[1982]: time="2026-04-17T23:38:39.717594099Z" level=info msg="TearDown network for sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\" successfully" Apr 17 23:38:39.719114 containerd[1982]: time="2026-04-17T23:38:39.717627675Z" level=info msg="StopPodSandbox for \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\" returns successfully" Apr 17 23:38:39.720928 containerd[1982]: time="2026-04-17T23:38:39.720889152Z" level=info msg="RemovePodSandbox for \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\"" Apr 17 23:38:39.721041 containerd[1982]: time="2026-04-17T23:38:39.720940170Z" level=info msg="Forcibly stopping sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\"" Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.795 [WARNING][6356] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" WorkloadEndpoint="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.795 [INFO][6356] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.795 [INFO][6356] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" iface="eth0" netns="" Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.795 [INFO][6356] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.795 [INFO][6356] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.822 [INFO][6363] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.822 [INFO][6363] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.822 [INFO][6363] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.828 [WARNING][6363] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.828 [INFO][6363] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" HandleID="k8s-pod-network.d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Workload="ip--172--31--16--109-k8s-whisker--65448f5dbb--ln8lt-eth0" Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.834 [INFO][6363] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:39.838846 containerd[1982]: 2026-04-17 23:38:39.836 [INFO][6356] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43" Apr 17 23:38:39.839853 containerd[1982]: time="2026-04-17T23:38:39.838913032Z" level=info msg="TearDown network for sandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\" successfully" Apr 17 23:38:39.849707 containerd[1982]: time="2026-04-17T23:38:39.849661663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:38:39.849857 containerd[1982]: time="2026-04-17T23:38:39.849750069Z" level=info msg="RemovePodSandbox \"d5d4f8f0fee61bc06cfe8cf3a616269604d773de29ef6a2703654873c7d50d43\" returns successfully" Apr 17 23:38:39.850391 containerd[1982]: time="2026-04-17T23:38:39.850354749Z" level=info msg="StopPodSandbox for \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\"" Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.900 [WARNING][6378] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc", Pod:"goldmane-5b85766d88-hzpfm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"calica1f9d38416", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.901 [INFO][6378] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.901 [INFO][6378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" iface="eth0" netns="" Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.901 [INFO][6378] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.901 [INFO][6378] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.926 [INFO][6385] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" HandleID="k8s-pod-network.06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0" Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.927 [INFO][6385] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.927 [INFO][6385] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.933 [WARNING][6385] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" HandleID="k8s-pod-network.06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0" Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.933 [INFO][6385] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" HandleID="k8s-pod-network.06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0" Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.935 [INFO][6385] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:39.939965 containerd[1982]: 2026-04-17 23:38:39.937 [INFO][6378] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:39.939965 containerd[1982]: time="2026-04-17T23:38:39.939790879Z" level=info msg="TearDown network for sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\" successfully" Apr 17 23:38:39.939965 containerd[1982]: time="2026-04-17T23:38:39.939817240Z" level=info msg="StopPodSandbox for \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\" returns successfully" Apr 17 23:38:39.941745 containerd[1982]: time="2026-04-17T23:38:39.940330722Z" level=info msg="RemovePodSandbox for \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\"" Apr 17 23:38:39.941745 containerd[1982]: time="2026-04-17T23:38:39.940355424Z" level=info msg="Forcibly stopping sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\"" Apr 17 23:38:40.060283 ntpd[1954]: Listen normally on 12 calidcb52c98648 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:38:40.077199 ntpd[1954]: 17 Apr 23:38:40 ntpd[1954]: Listen normally on 12 calidcb52c98648 
[fe80::ecee:eeff:feee:eeee%10]:123 Apr 17 23:38:40.077199 ntpd[1954]: 17 Apr 23:38:40 ntpd[1954]: Listen normally on 13 calid2fde6422ae [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:38:40.077199 ntpd[1954]: 17 Apr 23:38:40 ntpd[1954]: Listen normally on 14 califc500cec28f [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 23:38:40.077199 ntpd[1954]: 17 Apr 23:38:40 ntpd[1954]: Listen normally on 15 calicb5486a39c1 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 23:38:40.077199 ntpd[1954]: 17 Apr 23:38:40 ntpd[1954]: Listen normally on 16 calica1f9d38416 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 23:38:40.077199 ntpd[1954]: 17 Apr 23:38:40 ntpd[1954]: Listen normally on 17 cali904775ccd0c [fe80::ecee:eeff:feee:eeee%15]:123 Apr 17 23:38:40.077199 ntpd[1954]: 17 Apr 23:38:40 ntpd[1954]: Deleting interface #9 calid0b548b7bc8, fe80::ecee:eeff:feee:eeee%5#123, interface stats: received=0, sent=0, dropped=0, active_time=13 secs Apr 17 23:38:40.060363 ntpd[1954]: Listen normally on 13 calid2fde6422ae [fe80::ecee:eeff:feee:eeee%11]:123 Apr 17 23:38:40.060440 ntpd[1954]: Listen normally on 14 califc500cec28f [fe80::ecee:eeff:feee:eeee%12]:123 Apr 17 23:38:40.061061 ntpd[1954]: Listen normally on 15 calicb5486a39c1 [fe80::ecee:eeff:feee:eeee%13]:123 Apr 17 23:38:40.061179 ntpd[1954]: Listen normally on 16 calica1f9d38416 [fe80::ecee:eeff:feee:eeee%14]:123 Apr 17 23:38:40.061255 ntpd[1954]: Listen normally on 17 cali904775ccd0c [fe80::ecee:eeff:feee:eeee%15]:123 Apr 17 23:38:40.061330 ntpd[1954]: Deleting interface #9 calid0b548b7bc8, fe80::ecee:eeff:feee:eeee%5#123, interface stats: received=0, sent=0, dropped=0, active_time=13 secs Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:39.997 [WARNING][6399] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"ebc34f2a-f6b4-48f0-9c63-2ba7bb9bc6ac", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc", Pod:"goldmane-5b85766d88-hzpfm", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.2.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calica1f9d38416", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:39.997 [INFO][6399] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:39.997 [INFO][6399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" iface="eth0" netns="" Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:39.997 [INFO][6399] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:39.997 [INFO][6399] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:40.090 [INFO][6414] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" HandleID="k8s-pod-network.06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0" Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:40.090 [INFO][6414] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:40.090 [INFO][6414] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:40.104 [WARNING][6414] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" HandleID="k8s-pod-network.06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0" Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:40.104 [INFO][6414] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" HandleID="k8s-pod-network.06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Workload="ip--172--31--16--109-k8s-goldmane--5b85766d88--hzpfm-eth0" Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:40.106 [INFO][6414] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:40.112294 containerd[1982]: 2026-04-17 23:38:40.109 [INFO][6399] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb" Apr 17 23:38:40.114204 containerd[1982]: time="2026-04-17T23:38:40.112855490Z" level=info msg="TearDown network for sandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\" successfully" Apr 17 23:38:40.122038 containerd[1982]: time="2026-04-17T23:38:40.121984713Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:38:40.122197 containerd[1982]: time="2026-04-17T23:38:40.122078617Z" level=info msg="RemovePodSandbox \"06d0211c9b1e39f5f2911ed4ad347d5fa010422e3be9a8fd026c671d2560cacb\" returns successfully" Apr 17 23:38:40.122757 containerd[1982]: time="2026-04-17T23:38:40.122731897Z" level=info msg="StopPodSandbox for \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\"" Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.181 [WARNING][6444] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0", GenerateName:"calico-apiserver-84cf778744-", Namespace:"calico-system", SelfLink:"", UID:"1bc2dee4-01d4-4b29-b645-fc728a206f14", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cf778744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0", Pod:"calico-apiserver-84cf778744-ml2v7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califc500cec28f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.182 [INFO][6444] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.182 [INFO][6444] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" iface="eth0" netns="" Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.182 [INFO][6444] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.182 [INFO][6444] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.231 [INFO][6451] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" HandleID="k8s-pod-network.409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.232 [INFO][6451] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.232 [INFO][6451] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.239 [WARNING][6451] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" HandleID="k8s-pod-network.409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.240 [INFO][6451] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" HandleID="k8s-pod-network.409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.242 [INFO][6451] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:40.247987 containerd[1982]: 2026-04-17 23:38:40.244 [INFO][6444] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:40.247987 containerd[1982]: time="2026-04-17T23:38:40.247843384Z" level=info msg="TearDown network for sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\" successfully" Apr 17 23:38:40.247987 containerd[1982]: time="2026-04-17T23:38:40.247876916Z" level=info msg="StopPodSandbox for \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\" returns successfully" Apr 17 23:38:40.251050 containerd[1982]: time="2026-04-17T23:38:40.249120240Z" level=info msg="RemovePodSandbox for \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\"" Apr 17 23:38:40.251050 containerd[1982]: time="2026-04-17T23:38:40.249846108Z" level=info msg="Forcibly stopping sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\"" Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.311 [WARNING][6469] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0", GenerateName:"calico-apiserver-84cf778744-", Namespace:"calico-system", SelfLink:"", UID:"1bc2dee4-01d4-4b29-b645-fc728a206f14", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cf778744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"fe4a65a6eda144059f4a8cfeef8f540f7a93d8bee86177dc1364e41810455aa0", Pod:"calico-apiserver-84cf778744-ml2v7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"califc500cec28f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.311 [INFO][6469] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.311 [INFO][6469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" iface="eth0" netns="" Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.311 [INFO][6469] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.311 [INFO][6469] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.347 [INFO][6476] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" HandleID="k8s-pod-network.409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.347 [INFO][6476] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.347 [INFO][6476] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.361 [WARNING][6476] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" HandleID="k8s-pod-network.409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.361 [INFO][6476] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" HandleID="k8s-pod-network.409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--ml2v7-eth0" Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.363 [INFO][6476] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:40.369129 containerd[1982]: 2026-04-17 23:38:40.366 [INFO][6469] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9" Apr 17 23:38:40.369129 containerd[1982]: time="2026-04-17T23:38:40.369178366Z" level=info msg="TearDown network for sandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\" successfully" Apr 17 23:38:40.403563 containerd[1982]: time="2026-04-17T23:38:40.402456136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:38:40.403563 containerd[1982]: time="2026-04-17T23:38:40.402556074Z" level=info msg="RemovePodSandbox \"409ce44eb7070206af4f3bbef6a135004725ddb7335c98a6ee4a8737aab527f9\" returns successfully" Apr 17 23:38:40.403563 containerd[1982]: time="2026-04-17T23:38:40.403393388Z" level=info msg="StopPodSandbox for \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\"" Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.469 [WARNING][6490] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6", Pod:"coredns-674b8bbfcf-c8jqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcb52c98648", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.469 [INFO][6490] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.469 [INFO][6490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" iface="eth0" netns="" Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.469 [INFO][6490] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.469 [INFO][6490] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.513 [INFO][6498] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" HandleID="k8s-pod-network.8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0" Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.514 [INFO][6498] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.514 [INFO][6498] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.523 [WARNING][6498] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" HandleID="k8s-pod-network.8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0"
Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.523 [INFO][6498] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" HandleID="k8s-pod-network.8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0"
Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.530 [INFO][6498] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:40.540600 containerd[1982]: 2026-04-17 23:38:40.533 [INFO][6490] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4"
Apr 17 23:38:40.540600 containerd[1982]: time="2026-04-17T23:38:40.540417050Z" level=info msg="TearDown network for sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\" successfully"
Apr 17 23:38:40.540600 containerd[1982]: time="2026-04-17T23:38:40.540438992Z" level=info msg="StopPodSandbox for \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\" returns successfully"
Apr 17 23:38:40.543151 containerd[1982]: time="2026-04-17T23:38:40.542824401Z" level=info msg="RemovePodSandbox for \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\""
Apr 17 23:38:40.543151 containerd[1982]: time="2026-04-17T23:38:40.542866692Z" level=info msg="Forcibly stopping sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\""
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.606 [WARNING][6513] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1f5613b0-e0ec-4ed6-ae4c-5a83ab1b043e", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"076da6390eb2cb877470494c5db63597c63bc1ad093129809bb9a45507d4b0d6", Pod:"coredns-674b8bbfcf-c8jqp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidcb52c98648", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.609 [INFO][6513] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4"
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.609 [INFO][6513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" iface="eth0" netns=""
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.609 [INFO][6513] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4"
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.609 [INFO][6513] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4"
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.650 [INFO][6521] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" HandleID="k8s-pod-network.8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0"
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.652 [INFO][6521] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.652 [INFO][6521] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.664 [WARNING][6521] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" HandleID="k8s-pod-network.8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0"
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.664 [INFO][6521] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" HandleID="k8s-pod-network.8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--c8jqp-eth0"
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.666 [INFO][6521] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:40.672887 containerd[1982]: 2026-04-17 23:38:40.669 [INFO][6513] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4"
Apr 17 23:38:40.672887 containerd[1982]: time="2026-04-17T23:38:40.672856523Z" level=info msg="TearDown network for sandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\" successfully"
Apr 17 23:38:40.685147 containerd[1982]: time="2026-04-17T23:38:40.685089888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:38:40.685275 containerd[1982]: time="2026-04-17T23:38:40.685209090Z" level=info msg="RemovePodSandbox \"8076cc0401b1587e86a8a152be9931b0b78e76e98ab2600a2d733931b8404ca4\" returns successfully"
Apr 17 23:38:40.687713 containerd[1982]: time="2026-04-17T23:38:40.687655066Z" level=info msg="StopPodSandbox for \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\""
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.759 [WARNING][6540] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab6588a0-8817-4e9c-935c-c360a970fe47", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1", Pod:"coredns-674b8bbfcf-glngr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70aefc878c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.759 [INFO][6540] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4"
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.759 [INFO][6540] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" iface="eth0" netns=""
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.759 [INFO][6540] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4"
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.759 [INFO][6540] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4"
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.796 [INFO][6550] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" HandleID="k8s-pod-network.714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.796 [INFO][6550] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.796 [INFO][6550] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.804 [WARNING][6550] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" HandleID="k8s-pod-network.714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.805 [INFO][6550] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" HandleID="k8s-pod-network.714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.807 [INFO][6550] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:40.812372 containerd[1982]: 2026-04-17 23:38:40.809 [INFO][6540] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4"
Apr 17 23:38:40.813582 containerd[1982]: time="2026-04-17T23:38:40.812336667Z" level=info msg="TearDown network for sandbox \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\" successfully"
Apr 17 23:38:40.813582 containerd[1982]: time="2026-04-17T23:38:40.812802931Z" level=info msg="StopPodSandbox for \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\" returns successfully"
Apr 17 23:38:40.814054 containerd[1982]: time="2026-04-17T23:38:40.814019564Z" level=info msg="RemovePodSandbox for \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\""
Apr 17 23:38:40.814124 containerd[1982]: time="2026-04-17T23:38:40.814053883Z" level=info msg="Forcibly stopping sandbox \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\""
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.871 [WARNING][6564] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"ab6588a0-8817-4e9c-935c-c360a970fe47", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"0a844eb0c08628ef923e7536ccf939e520035e7c31aaaf4a2b76c54fa924c3a1", Pod:"coredns-674b8bbfcf-glngr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.2.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali70aefc878c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.871 [INFO][6564] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4"
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.871 [INFO][6564] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" iface="eth0" netns=""
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.871 [INFO][6564] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4"
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.871 [INFO][6564] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4"
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.912 [INFO][6571] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" HandleID="k8s-pod-network.714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.913 [INFO][6571] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.913 [INFO][6571] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.924 [WARNING][6571] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" HandleID="k8s-pod-network.714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.924 [INFO][6571] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" HandleID="k8s-pod-network.714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4" Workload="ip--172--31--16--109-k8s-coredns--674b8bbfcf--glngr-eth0"
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.927 [INFO][6571] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:40.940790 containerd[1982]: 2026-04-17 23:38:40.931 [INFO][6564] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4"
Apr 17 23:38:40.940790 containerd[1982]: time="2026-04-17T23:38:40.940573795Z" level=info msg="TearDown network for sandbox \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\" successfully"
Apr 17 23:38:40.950385 containerd[1982]: time="2026-04-17T23:38:40.950325181Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:38:40.950527 containerd[1982]: time="2026-04-17T23:38:40.950480478Z" level=info msg="RemovePodSandbox \"714df8fe23b696b9cbfa535ad15b076d929ad265ff4003b017f3cf83f5034cd4\" returns successfully"
Apr 17 23:38:40.951653 containerd[1982]: time="2026-04-17T23:38:40.951257316Z" level=info msg="StopPodSandbox for \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\""
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.009 [WARNING][6585] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0", GenerateName:"calico-kube-controllers-7c9ff59c-", Namespace:"calico-system", SelfLink:"", UID:"4a4f940d-c537-4a63-a007-4209639f0172", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9ff59c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f", Pod:"calico-kube-controllers-7c9ff59c-q46w4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c79a7898c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.009 [INFO][6585] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b"
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.009 [INFO][6585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" iface="eth0" netns=""
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.009 [INFO][6585] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b"
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.009 [INFO][6585] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b"
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.058 [INFO][6592] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" HandleID="k8s-pod-network.fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.058 [INFO][6592] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.058 [INFO][6592] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.065 [WARNING][6592] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" HandleID="k8s-pod-network.fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.065 [INFO][6592] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" HandleID="k8s-pod-network.fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.067 [INFO][6592] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:41.075751 containerd[1982]: 2026-04-17 23:38:41.069 [INFO][6585] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b"
Apr 17 23:38:41.084589 containerd[1982]: time="2026-04-17T23:38:41.083576272Z" level=info msg="TearDown network for sandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\" successfully"
Apr 17 23:38:41.084589 containerd[1982]: time="2026-04-17T23:38:41.083646785Z" level=info msg="StopPodSandbox for \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\" returns successfully"
Apr 17 23:38:41.096960 containerd[1982]: time="2026-04-17T23:38:41.096911062Z" level=info msg="RemovePodSandbox for \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\""
Apr 17 23:38:41.096960 containerd[1982]: time="2026-04-17T23:38:41.096953909Z" level=info msg="Forcibly stopping sandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\""
Apr 17 23:38:41.253242 sshd[6242]: pam_unix(sshd:session): session closed for user core
Apr 17 23:38:41.269153 systemd[1]: sshd@7-172.31.16.109:22-20.229.252.112:40674.service: Deactivated successfully.
Apr 17 23:38:41.272933 systemd[1]: session-8.scope: Deactivated successfully.
Apr 17 23:38:41.274831 systemd-logind[1959]: Session 8 logged out. Waiting for processes to exit.
Apr 17 23:38:41.279143 systemd-logind[1959]: Removed session 8.
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.219 [WARNING][6606] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0", GenerateName:"calico-kube-controllers-7c9ff59c-", Namespace:"calico-system", SelfLink:"", UID:"4a4f940d-c537-4a63-a007-4209639f0172", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 57, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c9ff59c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"84320d358ccec3c12912d92c44cb4779f1c25cd32ecbe66eff1f047c6d91b31f", Pod:"calico-kube-controllers-7c9ff59c-q46w4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.2.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3c79a7898c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.219 [INFO][6606] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b"
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.219 [INFO][6606] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" iface="eth0" netns=""
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.219 [INFO][6606] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b"
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.219 [INFO][6606] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b"
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.251 [INFO][6614] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" HandleID="k8s-pod-network.fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.251 [INFO][6614] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.251 [INFO][6614] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.275 [WARNING][6614] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" HandleID="k8s-pod-network.fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.276 [INFO][6614] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" HandleID="k8s-pod-network.fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b" Workload="ip--172--31--16--109-k8s-calico--kube--controllers--7c9ff59c--q46w4-eth0"
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.281 [INFO][6614] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:41.286197 containerd[1982]: 2026-04-17 23:38:41.284 [INFO][6606] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b"
Apr 17 23:38:41.288161 containerd[1982]: time="2026-04-17T23:38:41.286239131Z" level=info msg="TearDown network for sandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\" successfully"
Apr 17 23:38:41.292094 containerd[1982]: time="2026-04-17T23:38:41.292059077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 17 23:38:41.292223 containerd[1982]: time="2026-04-17T23:38:41.292137369Z" level=info msg="RemovePodSandbox \"fdd027841236ca8297ba04f294a4f1ac6113a6c25c04004286749c4d56cfcc4b\" returns successfully"
Apr 17 23:38:41.292798 containerd[1982]: time="2026-04-17T23:38:41.292770933Z" level=info msg="StopPodSandbox for \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\""
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.361 [WARNING][6631] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0", GenerateName:"calico-apiserver-84cf778744-", Namespace:"calico-system", SelfLink:"", UID:"cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cf778744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2", Pod:"calico-apiserver-84cf778744-tr4mm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali904775ccd0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.363 [INFO][6631] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58"
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.363 [INFO][6631] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" iface="eth0" netns=""
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.363 [INFO][6631] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58"
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.363 [INFO][6631] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58"
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.392 [INFO][6639] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" HandleID="k8s-pod-network.cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0"
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.393 [INFO][6639] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.393 [INFO][6639] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.402 [WARNING][6639] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" HandleID="k8s-pod-network.cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0"
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.402 [INFO][6639] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" HandleID="k8s-pod-network.cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0"
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.404 [INFO][6639] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 17 23:38:41.408846 containerd[1982]: 2026-04-17 23:38:41.406 [INFO][6631] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58"
Apr 17 23:38:41.409880 containerd[1982]: time="2026-04-17T23:38:41.408942353Z" level=info msg="TearDown network for sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\" successfully"
Apr 17 23:38:41.409880 containerd[1982]: time="2026-04-17T23:38:41.408976577Z" level=info msg="StopPodSandbox for \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\" returns successfully"
Apr 17 23:38:41.409880 containerd[1982]: time="2026-04-17T23:38:41.409631957Z" level=info msg="RemovePodSandbox for \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\""
Apr 17 23:38:41.409880 containerd[1982]: time="2026-04-17T23:38:41.409668387Z" level=info msg="Forcibly stopping sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\""
Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.449 [WARNING][6654] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0", GenerateName:"calico-apiserver-84cf778744-", Namespace:"calico-system", SelfLink:"", UID:"cec6c23d-ce9d-4878-9eb1-de1fc8bcdc4b", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 56, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84cf778744", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2", Pod:"calico-apiserver-84cf778744-tr4mm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.2.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali904775ccd0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.450 [INFO][6654] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58"
Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.450 [INFO][6654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns
name, ignoring. ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" iface="eth0" netns="" Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.450 [INFO][6654] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.450 [INFO][6654] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.475 [INFO][6661] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" HandleID="k8s-pod-network.cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.475 [INFO][6661] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.475 [INFO][6661] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.484 [WARNING][6661] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" HandleID="k8s-pod-network.cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.484 [INFO][6661] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" HandleID="k8s-pod-network.cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Workload="ip--172--31--16--109-k8s-calico--apiserver--84cf778744--tr4mm-eth0" Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.486 [INFO][6661] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:41.490941 containerd[1982]: 2026-04-17 23:38:41.488 [INFO][6654] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58" Apr 17 23:38:41.491611 containerd[1982]: time="2026-04-17T23:38:41.491194390Z" level=info msg="TearDown network for sandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\" successfully" Apr 17 23:38:41.497885 containerd[1982]: time="2026-04-17T23:38:41.497839383Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 17 23:38:41.498029 containerd[1982]: time="2026-04-17T23:38:41.497930903Z" level=info msg="RemovePodSandbox \"cb1602ffa5d305b038dab5590e01ef0fb6486e24dd7eefda73ce8efdad092c58\" returns successfully" Apr 17 23:38:41.499852 containerd[1982]: time="2026-04-17T23:38:41.498421926Z" level=info msg="StopPodSandbox for \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\"" Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.555 [WARNING][6675] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36a96baf-e8bf-431a-9ddf-e9ecc28a4802", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b", Pod:"csi-node-driver-v54pl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb5486a39c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.555 [INFO][6675] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.555 [INFO][6675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" iface="eth0" netns="" Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.555 [INFO][6675] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.555 [INFO][6675] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.581 [INFO][6682] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" HandleID="k8s-pod-network.e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0" Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.581 [INFO][6682] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.581 [INFO][6682] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.588 [WARNING][6682] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" HandleID="k8s-pod-network.e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0" Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.588 [INFO][6682] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" HandleID="k8s-pod-network.e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0" Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.590 [INFO][6682] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:41.594091 containerd[1982]: 2026-04-17 23:38:41.592 [INFO][6675] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Apr 17 23:38:41.594091 containerd[1982]: time="2026-04-17T23:38:41.594060368Z" level=info msg="TearDown network for sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\" successfully" Apr 17 23:38:41.594091 containerd[1982]: time="2026-04-17T23:38:41.594090923Z" level=info msg="StopPodSandbox for \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\" returns successfully" Apr 17 23:38:41.598898 containerd[1982]: time="2026-04-17T23:38:41.596949535Z" level=info msg="RemovePodSandbox for \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\"" Apr 17 23:38:41.598898 containerd[1982]: time="2026-04-17T23:38:41.596987963Z" level=info msg="Forcibly stopping sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\"" Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.644 [WARNING][6696] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"36a96baf-e8bf-431a-9ddf-e9ecc28a4802", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.April, 17, 23, 37, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-109", ContainerID:"f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b", Pod:"csi-node-driver-v54pl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.2.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicb5486a39c1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.645 [INFO][6696] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.645 [INFO][6696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" iface="eth0" netns="" Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.645 [INFO][6696] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.645 [INFO][6696] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.672 [INFO][6704] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" HandleID="k8s-pod-network.e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0" Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.672 [INFO][6704] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.672 [INFO][6704] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.679 [WARNING][6704] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" HandleID="k8s-pod-network.e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0" Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.679 [INFO][6704] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" HandleID="k8s-pod-network.e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Workload="ip--172--31--16--109-k8s-csi--node--driver--v54pl-eth0" Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.683 [INFO][6704] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 17 23:38:41.690273 containerd[1982]: 2026-04-17 23:38:41.687 [INFO][6696] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11" Apr 17 23:38:41.690273 containerd[1982]: time="2026-04-17T23:38:41.689494215Z" level=info msg="TearDown network for sandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\" successfully" Apr 17 23:38:41.694147 containerd[1982]: time="2026-04-17T23:38:41.694100026Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 17 23:38:41.694327 containerd[1982]: time="2026-04-17T23:38:41.694191522Z" level=info msg="RemovePodSandbox \"e940a230362fc9353a579eed665dc019b15c8367d8068d552495f6bde1ff5b11\" returns successfully" Apr 17 23:38:42.941927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3657849111.mount: Deactivated successfully. 
Apr 17 23:38:43.591143 containerd[1982]: time="2026-04-17T23:38:43.591048010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:43.607032 containerd[1982]: time="2026-04-17T23:38:43.594163834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 17 23:38:43.607032 containerd[1982]: time="2026-04-17T23:38:43.595900111Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:43.607645 containerd[1982]: time="2026-04-17T23:38:43.601052210Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.987815998s" Apr 17 23:38:43.607645 containerd[1982]: time="2026-04-17T23:38:43.607350012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 17 23:38:43.608234 containerd[1982]: time="2026-04-17T23:38:43.608198200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:43.636822 containerd[1982]: time="2026-04-17T23:38:43.636779020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 17 23:38:43.840068 containerd[1982]: time="2026-04-17T23:38:43.840019733Z" level=info msg="CreateContainer within sandbox 
\"973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 17 23:38:43.870895 containerd[1982]: time="2026-04-17T23:38:43.870849285Z" level=info msg="CreateContainer within sandbox \"973458995d83cfb07e431d2826a7c7452778339cd6c65d1338c74834625122cc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fd45fcfbfc37dbeab971ba2d20c98501298aef55466c02ee9ed5f19fd7542038\"" Apr 17 23:38:43.877644 containerd[1982]: time="2026-04-17T23:38:43.877594918Z" level=info msg="StartContainer for \"fd45fcfbfc37dbeab971ba2d20c98501298aef55466c02ee9ed5f19fd7542038\"" Apr 17 23:38:43.989890 containerd[1982]: time="2026-04-17T23:38:43.989444408Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:44.049252 containerd[1982]: time="2026-04-17T23:38:44.048910319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 17 23:38:44.055080 containerd[1982]: time="2026-04-17T23:38:44.054103339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 417.281653ms" Apr 17 23:38:44.055080 containerd[1982]: time="2026-04-17T23:38:44.054172717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 17 23:38:44.079997 containerd[1982]: time="2026-04-17T23:38:44.071868442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 17 23:38:44.084216 containerd[1982]: 
time="2026-04-17T23:38:44.084121833Z" level=info msg="CreateContainer within sandbox \"44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 17 23:38:44.205511 containerd[1982]: time="2026-04-17T23:38:44.201909043Z" level=info msg="CreateContainer within sandbox \"44f51617e2b3944a489cf557e72f1b4c100de6185250fa05991960953792b3d2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c6f53dd5b5883048973b9256410cd723553ca059050747b50289b4f1f444e7d2\"" Apr 17 23:38:44.268174 containerd[1982]: time="2026-04-17T23:38:44.268122903Z" level=info msg="StartContainer for \"c6f53dd5b5883048973b9256410cd723553ca059050747b50289b4f1f444e7d2\"" Apr 17 23:38:44.409472 systemd[1]: Started cri-containerd-c6f53dd5b5883048973b9256410cd723553ca059050747b50289b4f1f444e7d2.scope - libcontainer container c6f53dd5b5883048973b9256410cd723553ca059050747b50289b4f1f444e7d2. Apr 17 23:38:44.419256 systemd[1]: Started cri-containerd-fd45fcfbfc37dbeab971ba2d20c98501298aef55466c02ee9ed5f19fd7542038.scope - libcontainer container fd45fcfbfc37dbeab971ba2d20c98501298aef55466c02ee9ed5f19fd7542038. 
Apr 17 23:38:44.671929 containerd[1982]: time="2026-04-17T23:38:44.671811166Z" level=info msg="StartContainer for \"fd45fcfbfc37dbeab971ba2d20c98501298aef55466c02ee9ed5f19fd7542038\" returns successfully" Apr 17 23:38:44.674335 containerd[1982]: time="2026-04-17T23:38:44.673977965Z" level=info msg="StartContainer for \"c6f53dd5b5883048973b9256410cd723553ca059050747b50289b4f1f444e7d2\" returns successfully" Apr 17 23:38:45.033644 kubelet[3213]: I0417 23:38:45.015340 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-hzpfm" podStartSLOduration=39.277455947 podStartE2EDuration="48.941058047s" podCreationTimestamp="2026-04-17 23:37:56 +0000 UTC" firstStartedPulling="2026-04-17 23:38:33.972769171 +0000 UTC m=+56.145081786" lastFinishedPulling="2026-04-17 23:38:43.636371257 +0000 UTC m=+65.808683886" observedRunningTime="2026-04-17 23:38:44.881508354 +0000 UTC m=+67.053820986" watchObservedRunningTime="2026-04-17 23:38:44.941058047 +0000 UTC m=+67.113370678" Apr 17 23:38:46.309768 containerd[1982]: time="2026-04-17T23:38:46.308678612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:46.311942 containerd[1982]: time="2026-04-17T23:38:46.311661507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 17 23:38:46.312581 containerd[1982]: time="2026-04-17T23:38:46.312554816Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:46.320716 containerd[1982]: time="2026-04-17T23:38:46.320304798Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 17 23:38:46.328388 containerd[1982]: time="2026-04-17T23:38:46.328135880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.256220079s" Apr 17 23:38:46.328388 containerd[1982]: time="2026-04-17T23:38:46.328182376Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 17 23:38:46.417972 kubelet[3213]: I0417 23:38:46.417412 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-84cf778744-tr4mm" podStartSLOduration=42.2030367 podStartE2EDuration="50.417381292s" podCreationTimestamp="2026-04-17 23:37:56 +0000 UTC" firstStartedPulling="2026-04-17 23:38:35.856165884 +0000 UTC m=+58.028478507" lastFinishedPulling="2026-04-17 23:38:44.070510482 +0000 UTC m=+66.242823099" observedRunningTime="2026-04-17 23:38:45.935562239 +0000 UTC m=+68.107874871" watchObservedRunningTime="2026-04-17 23:38:46.417381292 +0000 UTC m=+68.589693947" Apr 17 23:38:46.489152 systemd[1]: Started sshd@8-172.31.16.109:22-20.229.252.112:55900.service - OpenSSH per-connection server daemon (20.229.252.112:55900). 
Apr 17 23:38:46.732762 containerd[1982]: time="2026-04-17T23:38:46.731455913Z" level=info msg="CreateContainer within sandbox \"f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 17 23:38:46.760324 containerd[1982]: time="2026-04-17T23:38:46.760035761Z" level=info msg="CreateContainer within sandbox \"f53385e6ee8e533f1a4224605832c1f520e6969ea35bace6c649d584b542283b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7b83a75cc18243dbed038e0fbfa34dcf00959ddca6780ffa3136a3110428ce87\"" Apr 17 23:38:46.760923 containerd[1982]: time="2026-04-17T23:38:46.760889595Z" level=info msg="StartContainer for \"7b83a75cc18243dbed038e0fbfa34dcf00959ddca6780ffa3136a3110428ce87\"" Apr 17 23:38:46.825139 systemd[1]: Started cri-containerd-7b83a75cc18243dbed038e0fbfa34dcf00959ddca6780ffa3136a3110428ce87.scope - libcontainer container 7b83a75cc18243dbed038e0fbfa34dcf00959ddca6780ffa3136a3110428ce87. Apr 17 23:38:46.872777 containerd[1982]: time="2026-04-17T23:38:46.872067521Z" level=info msg="StartContainer for \"7b83a75cc18243dbed038e0fbfa34dcf00959ddca6780ffa3136a3110428ce87\" returns successfully" Apr 17 23:38:46.998128 systemd[1]: run-containerd-runc-k8s.io-7b83a75cc18243dbed038e0fbfa34dcf00959ddca6780ffa3136a3110428ce87-runc.nISpWz.mount: Deactivated successfully. 
Apr 17 23:38:47.317343 kubelet[3213]: I0417 23:38:47.312726 3213 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 17 23:38:47.320623 kubelet[3213]: I0417 23:38:47.320532 3213 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 17 23:38:47.692569 sshd[6868]: Accepted publickey for core from 20.229.252.112 port 55900 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:38:47.699099 sshd[6868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:38:47.713288 systemd-logind[1959]: New session 9 of user core. Apr 17 23:38:47.716933 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 17 23:38:47.847862 kubelet[3213]: I0417 23:38:47.847782 3213 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v54pl" podStartSLOduration=38.313307065 podStartE2EDuration="50.847757422s" podCreationTimestamp="2026-04-17 23:37:57 +0000 UTC" firstStartedPulling="2026-04-17 23:38:33.852912725 +0000 UTC m=+56.025225334" lastFinishedPulling="2026-04-17 23:38:46.387363069 +0000 UTC m=+68.559675691" observedRunningTime="2026-04-17 23:38:46.939287983 +0000 UTC m=+69.111600618" watchObservedRunningTime="2026-04-17 23:38:47.847757422 +0000 UTC m=+70.020070054" Apr 17 23:38:49.196970 sshd[6868]: pam_unix(sshd:session): session closed for user core Apr 17 23:38:49.202980 systemd-logind[1959]: Session 9 logged out. Waiting for processes to exit. Apr 17 23:38:49.204071 systemd[1]: sshd@8-172.31.16.109:22-20.229.252.112:55900.service: Deactivated successfully. Apr 17 23:38:49.208556 systemd[1]: session-9.scope: Deactivated successfully. Apr 17 23:38:49.212957 systemd-logind[1959]: Removed session 9. 
Apr 17 23:38:51.588551 systemd[1]: run-containerd-runc-k8s.io-42b56c593fa17678a456fa58dd32fc97deb184654651a28f31f14544986efe10-runc.d71WRZ.mount: Deactivated successfully. Apr 17 23:38:53.783598 kubelet[3213]: I0417 23:38:53.783555 3213 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 17 23:38:54.371611 systemd[1]: Started sshd@9-172.31.16.109:22-20.229.252.112:55910.service - OpenSSH per-connection server daemon (20.229.252.112:55910). Apr 17 23:38:55.454034 sshd[6947]: Accepted publickey for core from 20.229.252.112 port 55910 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:38:55.458254 sshd[6947]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:38:55.464795 systemd-logind[1959]: New session 10 of user core. Apr 17 23:38:55.469895 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 17 23:38:55.702339 systemd[1]: run-containerd-runc-k8s.io-928805af13dae477ab48381d3d48cde0b30dcb395f6a36220baf0fdcf24e5584-runc.aZMgHW.mount: Deactivated successfully. Apr 17 23:38:56.791677 sshd[6947]: pam_unix(sshd:session): session closed for user core Apr 17 23:38:56.798326 systemd[1]: sshd@9-172.31.16.109:22-20.229.252.112:55910.service: Deactivated successfully. Apr 17 23:38:56.801012 systemd[1]: session-10.scope: Deactivated successfully. Apr 17 23:38:56.803166 systemd-logind[1959]: Session 10 logged out. Waiting for processes to exit. Apr 17 23:38:56.805211 systemd-logind[1959]: Removed session 10. Apr 17 23:39:01.976765 systemd[1]: Started sshd@10-172.31.16.109:22-20.229.252.112:43532.service - OpenSSH per-connection server daemon (20.229.252.112:43532). 
Apr 17 23:39:03.049001 sshd[7006]: Accepted publickey for core from 20.229.252.112 port 43532 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:39:03.050825 sshd[7006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:03.055923 systemd-logind[1959]: New session 11 of user core. Apr 17 23:39:03.061896 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 17 23:39:04.159728 sshd[7006]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:04.164372 systemd-logind[1959]: Session 11 logged out. Waiting for processes to exit. Apr 17 23:39:04.165458 systemd[1]: sshd@10-172.31.16.109:22-20.229.252.112:43532.service: Deactivated successfully. Apr 17 23:39:04.168224 systemd[1]: session-11.scope: Deactivated successfully. Apr 17 23:39:04.169889 systemd-logind[1959]: Removed session 11. Apr 17 23:39:04.333087 systemd[1]: Started sshd@11-172.31.16.109:22-20.229.252.112:43542.service - OpenSSH per-connection server daemon (20.229.252.112:43542). Apr 17 23:39:05.359197 sshd[7022]: Accepted publickey for core from 20.229.252.112 port 43542 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w Apr 17 23:39:05.360938 sshd[7022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 17 23:39:05.365607 systemd-logind[1959]: New session 12 of user core. Apr 17 23:39:05.369908 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 17 23:39:06.182983 sshd[7022]: pam_unix(sshd:session): session closed for user core Apr 17 23:39:06.190798 systemd[1]: sshd@11-172.31.16.109:22-20.229.252.112:43542.service: Deactivated successfully. Apr 17 23:39:06.193386 systemd[1]: session-12.scope: Deactivated successfully. Apr 17 23:39:06.195594 systemd-logind[1959]: Session 12 logged out. Waiting for processes to exit. Apr 17 23:39:06.197338 systemd-logind[1959]: Removed session 12. 
Apr 17 23:39:06.358108 systemd[1]: Started sshd@12-172.31.16.109:22-20.229.252.112:39056.service - OpenSSH per-connection server daemon (20.229.252.112:39056).
Apr 17 23:39:07.411142 sshd[7041]: Accepted publickey for core from 20.229.252.112 port 39056 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:39:07.419900 sshd[7041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:07.428084 systemd-logind[1959]: New session 13 of user core.
Apr 17 23:39:07.433951 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 17 23:39:08.365831 sshd[7041]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:08.370676 systemd[1]: sshd@12-172.31.16.109:22-20.229.252.112:39056.service: Deactivated successfully.
Apr 17 23:39:08.373441 systemd[1]: session-13.scope: Deactivated successfully.
Apr 17 23:39:08.374581 systemd-logind[1959]: Session 13 logged out. Waiting for processes to exit.
Apr 17 23:39:08.376118 systemd-logind[1959]: Removed session 13.
Apr 17 23:39:13.549467 systemd[1]: Started sshd@13-172.31.16.109:22-20.229.252.112:39062.service - OpenSSH per-connection server daemon (20.229.252.112:39062).
Apr 17 23:39:14.621434 sshd[7054]: Accepted publickey for core from 20.229.252.112 port 39062 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:39:14.627792 sshd[7054]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:14.632535 systemd-logind[1959]: New session 14 of user core.
Apr 17 23:39:14.636882 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 17 23:39:15.852882 sshd[7054]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:15.865219 systemd[1]: sshd@13-172.31.16.109:22-20.229.252.112:39062.service: Deactivated successfully.
Apr 17 23:39:15.867585 systemd[1]: session-14.scope: Deactivated successfully.
Apr 17 23:39:15.869289 systemd-logind[1959]: Session 14 logged out. Waiting for processes to exit.
Apr 17 23:39:15.870963 systemd-logind[1959]: Removed session 14.
Apr 17 23:39:16.037179 systemd[1]: Started sshd@14-172.31.16.109:22-20.229.252.112:45686.service - OpenSSH per-connection server daemon (20.229.252.112:45686).
Apr 17 23:39:17.108083 sshd[7090]: Accepted publickey for core from 20.229.252.112 port 45686 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:39:17.115288 sshd[7090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:17.121010 systemd-logind[1959]: New session 15 of user core.
Apr 17 23:39:17.125891 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 17 23:39:18.336349 sshd[7090]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:18.344812 systemd[1]: sshd@14-172.31.16.109:22-20.229.252.112:45686.service: Deactivated successfully.
Apr 17 23:39:18.347591 systemd[1]: session-15.scope: Deactivated successfully.
Apr 17 23:39:18.349972 systemd-logind[1959]: Session 15 logged out. Waiting for processes to exit.
Apr 17 23:39:18.351251 systemd-logind[1959]: Removed session 15.
Apr 17 23:39:18.517070 systemd[1]: Started sshd@15-172.31.16.109:22-20.229.252.112:45702.service - OpenSSH per-connection server daemon (20.229.252.112:45702).
Apr 17 23:39:19.569743 sshd[7107]: Accepted publickey for core from 20.229.252.112 port 45702 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:39:19.570627 sshd[7107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:19.575968 systemd-logind[1959]: New session 16 of user core.
Apr 17 23:39:19.581928 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 17 23:39:21.064803 sshd[7107]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:21.069337 systemd[1]: sshd@15-172.31.16.109:22-20.229.252.112:45702.service: Deactivated successfully.
Apr 17 23:39:21.072580 systemd[1]: session-16.scope: Deactivated successfully.
Apr 17 23:39:21.074455 systemd-logind[1959]: Session 16 logged out. Waiting for processes to exit.
Apr 17 23:39:21.076634 systemd-logind[1959]: Removed session 16.
Apr 17 23:39:21.125110 systemd[1]: run-containerd-runc-k8s.io-fd45fcfbfc37dbeab971ba2d20c98501298aef55466c02ee9ed5f19fd7542038-runc.K8dQTZ.mount: Deactivated successfully.
Apr 17 23:39:21.239522 systemd[1]: Started sshd@16-172.31.16.109:22-20.229.252.112:45714.service - OpenSSH per-connection server daemon (20.229.252.112:45714).
Apr 17 23:39:21.579627 systemd[1]: run-containerd-runc-k8s.io-42b56c593fa17678a456fa58dd32fc97deb184654651a28f31f14544986efe10-runc.6H27KA.mount: Deactivated successfully.
Apr 17 23:39:22.271251 sshd[7157]: Accepted publickey for core from 20.229.252.112 port 45714 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:39:22.274531 sshd[7157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:22.285797 systemd-logind[1959]: New session 17 of user core.
Apr 17 23:39:22.289901 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 17 23:39:24.010543 sshd[7157]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:24.014485 systemd[1]: sshd@16-172.31.16.109:22-20.229.252.112:45714.service: Deactivated successfully.
Apr 17 23:39:24.016676 systemd[1]: session-17.scope: Deactivated successfully.
Apr 17 23:39:24.018538 systemd-logind[1959]: Session 17 logged out. Waiting for processes to exit.
Apr 17 23:39:24.020446 systemd-logind[1959]: Removed session 17.
Apr 17 23:39:24.193504 systemd[1]: Started sshd@17-172.31.16.109:22-20.229.252.112:45728.service - OpenSSH per-connection server daemon (20.229.252.112:45728).
Apr 17 23:39:25.256873 sshd[7188]: Accepted publickey for core from 20.229.252.112 port 45728 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:39:25.258914 sshd[7188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:25.265760 systemd-logind[1959]: New session 18 of user core.
Apr 17 23:39:25.270916 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 17 23:39:26.191020 sshd[7188]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:26.194842 systemd[1]: sshd@17-172.31.16.109:22-20.229.252.112:45728.service: Deactivated successfully.
Apr 17 23:39:26.197423 systemd[1]: session-18.scope: Deactivated successfully.
Apr 17 23:39:26.199379 systemd-logind[1959]: Session 18 logged out. Waiting for processes to exit.
Apr 17 23:39:26.201128 systemd-logind[1959]: Removed session 18.
Apr 17 23:39:31.367987 systemd[1]: Started sshd@18-172.31.16.109:22-20.229.252.112:58702.service - OpenSSH per-connection server daemon (20.229.252.112:58702).
Apr 17 23:39:32.449051 sshd[7222]: Accepted publickey for core from 20.229.252.112 port 58702 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:39:32.452469 sshd[7222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:32.457770 systemd-logind[1959]: New session 19 of user core.
Apr 17 23:39:32.464882 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 17 23:39:33.469786 sshd[7222]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:33.485227 systemd[1]: sshd@18-172.31.16.109:22-20.229.252.112:58702.service: Deactivated successfully.
Apr 17 23:39:33.489919 systemd[1]: session-19.scope: Deactivated successfully.
Apr 17 23:39:33.498056 systemd-logind[1959]: Session 19 logged out. Waiting for processes to exit.
Apr 17 23:39:33.500726 systemd-logind[1959]: Removed session 19.
Apr 17 23:39:38.637752 systemd[1]: Started sshd@19-172.31.16.109:22-20.229.252.112:34484.service - OpenSSH per-connection server daemon (20.229.252.112:34484).
Apr 17 23:39:39.689561 sshd[7241]: Accepted publickey for core from 20.229.252.112 port 34484 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:39:39.693567 sshd[7241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:39.701100 systemd-logind[1959]: New session 20 of user core.
Apr 17 23:39:39.706917 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 17 23:39:40.008464 systemd[1]: run-containerd-runc-k8s.io-928805af13dae477ab48381d3d48cde0b30dcb395f6a36220baf0fdcf24e5584-runc.hii3UX.mount: Deactivated successfully.
Apr 17 23:39:40.737739 sshd[7241]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:40.743249 systemd[1]: sshd@19-172.31.16.109:22-20.229.252.112:34484.service: Deactivated successfully.
Apr 17 23:39:40.745752 systemd[1]: session-20.scope: Deactivated successfully.
Apr 17 23:39:40.747605 systemd-logind[1959]: Session 20 logged out. Waiting for processes to exit.
Apr 17 23:39:40.749920 systemd-logind[1959]: Removed session 20.
Apr 17 23:39:45.912351 systemd[1]: Started sshd@20-172.31.16.109:22-20.229.252.112:52928.service - OpenSSH per-connection server daemon (20.229.252.112:52928).
Apr 17 23:39:46.932071 sshd[7290]: Accepted publickey for core from 20.229.252.112 port 52928 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:39:46.939880 sshd[7290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:46.945960 systemd-logind[1959]: New session 21 of user core.
Apr 17 23:39:46.952015 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 17 23:39:47.758224 sshd[7290]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:47.769017 systemd-logind[1959]: Session 21 logged out. Waiting for processes to exit.
Apr 17 23:39:47.770071 systemd[1]: sshd@20-172.31.16.109:22-20.229.252.112:52928.service: Deactivated successfully.
Apr 17 23:39:47.772465 systemd[1]: session-21.scope: Deactivated successfully.
Apr 17 23:39:47.773973 systemd-logind[1959]: Removed session 21.
Apr 17 23:39:52.936365 systemd[1]: Started sshd@21-172.31.16.109:22-20.229.252.112:52934.service - OpenSSH per-connection server daemon (20.229.252.112:52934).
Apr 17 23:39:53.987946 sshd[7337]: Accepted publickey for core from 20.229.252.112 port 52934 ssh2: RSA SHA256:/JnJeuch0+dBe+734qwhVG1s2LEEHG3o+oYbjCsPr1w
Apr 17 23:39:53.991460 sshd[7337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 17 23:39:53.997420 systemd-logind[1959]: New session 22 of user core.
Apr 17 23:39:54.001921 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 17 23:39:54.957179 sshd[7337]: pam_unix(sshd:session): session closed for user core
Apr 17 23:39:54.962461 systemd-logind[1959]: Session 22 logged out. Waiting for processes to exit.
Apr 17 23:39:54.963058 systemd[1]: sshd@21-172.31.16.109:22-20.229.252.112:52934.service: Deactivated successfully.
Apr 17 23:39:54.965589 systemd[1]: session-22.scope: Deactivated successfully.
Apr 17 23:39:54.966684 systemd-logind[1959]: Removed session 22.
Apr 17 23:40:09.332578 systemd[1]: cri-containerd-d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62.scope: Deactivated successfully.
Apr 17 23:40:09.333321 systemd[1]: cri-containerd-d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62.scope: Consumed 3.508s CPU time, 18.4M memory peak, 0B memory swap peak.
Apr 17 23:40:09.597136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62-rootfs.mount: Deactivated successfully.
Apr 17 23:40:09.709795 containerd[1982]: time="2026-04-17T23:40:09.655849538Z" level=info msg="shim disconnected" id=d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62 namespace=k8s.io
Apr 17 23:40:09.713175 containerd[1982]: time="2026-04-17T23:40:09.709801192Z" level=warning msg="cleaning up after shim disconnected" id=d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62 namespace=k8s.io
Apr 17 23:40:09.713175 containerd[1982]: time="2026-04-17T23:40:09.709826268Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:40:10.363046 systemd[1]: cri-containerd-5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1.scope: Deactivated successfully.
Apr 17 23:40:10.363920 systemd[1]: cri-containerd-5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1.scope: Consumed 7.191s CPU time.
Apr 17 23:40:10.392928 containerd[1982]: time="2026-04-17T23:40:10.392642242Z" level=info msg="shim disconnected" id=5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1 namespace=k8s.io
Apr 17 23:40:10.392928 containerd[1982]: time="2026-04-17T23:40:10.392770160Z" level=warning msg="cleaning up after shim disconnected" id=5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1 namespace=k8s.io
Apr 17 23:40:10.392928 containerd[1982]: time="2026-04-17T23:40:10.392785665Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:40:10.396159 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1-rootfs.mount: Deactivated successfully.
Apr 17 23:40:10.720369 kubelet[3213]: I0417 23:40:10.720228 3213 scope.go:117] "RemoveContainer" containerID="5bca24549dfd5695a1909fad3cea80f7df881e947fe63616019c2ae5806be1c1"
Apr 17 23:40:10.729040 kubelet[3213]: I0417 23:40:10.728853 3213 scope.go:117] "RemoveContainer" containerID="d2ab2507103fd1b2cc756b30746cf6ee037f3f443144e176bf9426af012efb62"
Apr 17 23:40:10.856397 containerd[1982]: time="2026-04-17T23:40:10.856330374Z" level=info msg="CreateContainer within sandbox \"d73d2f5beea2dc202d72ff135757dd95c86675589d2d2b2d429a8ada2044cc2b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Apr 17 23:40:10.857283 containerd[1982]: time="2026-04-17T23:40:10.856948085Z" level=info msg="CreateContainer within sandbox \"0c93f5b772d442382b948af32027c33a193fd486c9615d72f5b3eb6c1bcf8e6d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Apr 17 23:40:10.997071 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount875524182.mount: Deactivated successfully.
Apr 17 23:40:11.021706 kubelet[3213]: E0417 23:40:11.020717 3213 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-16-109)"
Apr 17 23:40:11.030056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4047545392.mount: Deactivated successfully.
Apr 17 23:40:11.051514 containerd[1982]: time="2026-04-17T23:40:11.051356309Z" level=info msg="CreateContainer within sandbox \"d73d2f5beea2dc202d72ff135757dd95c86675589d2d2b2d429a8ada2044cc2b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"3d78d22d002674530fe253c906fdc763f38762c12d369a8f053aba1ebea35d8f\""
Apr 17 23:40:11.055961 containerd[1982]: time="2026-04-17T23:40:11.055892454Z" level=info msg="CreateContainer within sandbox \"0c93f5b772d442382b948af32027c33a193fd486c9615d72f5b3eb6c1bcf8e6d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"d50b7189ac4833058b61c1847c4fd9cfba1cacf50f39e54774203b21fc596e46\""
Apr 17 23:40:11.057723 containerd[1982]: time="2026-04-17T23:40:11.056480948Z" level=info msg="StartContainer for \"d50b7189ac4833058b61c1847c4fd9cfba1cacf50f39e54774203b21fc596e46\""
Apr 17 23:40:11.058983 containerd[1982]: time="2026-04-17T23:40:11.058960511Z" level=info msg="StartContainer for \"3d78d22d002674530fe253c906fdc763f38762c12d369a8f053aba1ebea35d8f\""
Apr 17 23:40:11.121897 systemd[1]: Started cri-containerd-d50b7189ac4833058b61c1847c4fd9cfba1cacf50f39e54774203b21fc596e46.scope - libcontainer container d50b7189ac4833058b61c1847c4fd9cfba1cacf50f39e54774203b21fc596e46.
Apr 17 23:40:11.135006 systemd[1]: Started cri-containerd-3d78d22d002674530fe253c906fdc763f38762c12d369a8f053aba1ebea35d8f.scope - libcontainer container 3d78d22d002674530fe253c906fdc763f38762c12d369a8f053aba1ebea35d8f.
Apr 17 23:40:11.194991 containerd[1982]: time="2026-04-17T23:40:11.194948349Z" level=info msg="StartContainer for \"3d78d22d002674530fe253c906fdc763f38762c12d369a8f053aba1ebea35d8f\" returns successfully"
Apr 17 23:40:11.222485 containerd[1982]: time="2026-04-17T23:40:11.222409179Z" level=info msg="StartContainer for \"d50b7189ac4833058b61c1847c4fd9cfba1cacf50f39e54774203b21fc596e46\" returns successfully"
Apr 17 23:40:14.773963 systemd[1]: cri-containerd-33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3.scope: Deactivated successfully.
Apr 17 23:40:14.774356 systemd[1]: cri-containerd-33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3.scope: Consumed 1.873s CPU time, 16.3M memory peak, 0B memory swap peak.
Apr 17 23:40:14.811111 containerd[1982]: time="2026-04-17T23:40:14.810795226Z" level=info msg="shim disconnected" id=33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3 namespace=k8s.io
Apr 17 23:40:14.811111 containerd[1982]: time="2026-04-17T23:40:14.810882763Z" level=warning msg="cleaning up after shim disconnected" id=33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3 namespace=k8s.io
Apr 17 23:40:14.811111 containerd[1982]: time="2026-04-17T23:40:14.810909580Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Apr 17 23:40:14.811412 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3-rootfs.mount: Deactivated successfully.
Apr 17 23:40:15.669111 kubelet[3213]: I0417 23:40:15.669079 3213 scope.go:117] "RemoveContainer" containerID="33b291eacbf952f10607991784188d4e011247c37b732911cf7e5d79d3cbcfc3"
Apr 17 23:40:15.671657 containerd[1982]: time="2026-04-17T23:40:15.671617458Z" level=info msg="CreateContainer within sandbox \"7210b6e6e3fa14de2fd7649a8af04ee3725082ebb300d19b315d127011f9487a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Apr 17 23:40:15.695838 containerd[1982]: time="2026-04-17T23:40:15.695641868Z" level=info msg="CreateContainer within sandbox \"7210b6e6e3fa14de2fd7649a8af04ee3725082ebb300d19b315d127011f9487a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"f9f617bbb5c84b96824ef01326b6f21ca7b4f0e699c0d50a13f1891471b64c90\""
Apr 17 23:40:15.696881 containerd[1982]: time="2026-04-17T23:40:15.696313434Z" level=info msg="StartContainer for \"f9f617bbb5c84b96824ef01326b6f21ca7b4f0e699c0d50a13f1891471b64c90\""
Apr 17 23:40:15.750938 systemd[1]: Started cri-containerd-f9f617bbb5c84b96824ef01326b6f21ca7b4f0e699c0d50a13f1891471b64c90.scope - libcontainer container f9f617bbb5c84b96824ef01326b6f21ca7b4f0e699c0d50a13f1891471b64c90.
Apr 17 23:40:15.802184 containerd[1982]: time="2026-04-17T23:40:15.802118417Z" level=info msg="StartContainer for \"f9f617bbb5c84b96824ef01326b6f21ca7b4f0e699c0d50a13f1891471b64c90\" returns successfully"
Apr 17 23:40:15.814240 systemd[1]: run-containerd-runc-k8s.io-f9f617bbb5c84b96824ef01326b6f21ca7b4f0e699c0d50a13f1891471b64c90-runc.TmjDHB.mount: Deactivated successfully.
Apr 17 23:40:21.032254 kubelet[3213]: E0417 23:40:21.028156 3213 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-109?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"