Apr 21 10:21:14.930818 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Apr 21 08:36:33 -00 2026
Apr 21 10:21:14.930855 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:21:14.930875 kernel: BIOS-provided physical RAM map:
Apr 21 10:21:14.930887 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Apr 21 10:21:14.930897 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000786cdfff] usable
Apr 21 10:21:14.930910 kernel: BIOS-e820: [mem 0x00000000786ce000-0x00000000787cdfff] type 20
Apr 21 10:21:14.930924 kernel: BIOS-e820: [mem 0x00000000787ce000-0x000000007894dfff] reserved
Apr 21 10:21:14.930936 kernel: BIOS-e820: [mem 0x000000007894e000-0x000000007895dfff] ACPI data
Apr 21 10:21:14.930946 kernel: BIOS-e820: [mem 0x000000007895e000-0x00000000789ddfff] ACPI NVS
Apr 21 10:21:14.930960 kernel: BIOS-e820: [mem 0x00000000789de000-0x000000007c97bfff] usable
Apr 21 10:21:14.930971 kernel: BIOS-e820: [mem 0x000000007c97c000-0x000000007c9fffff] reserved
Apr 21 10:21:14.930982 kernel: NX (Execute Disable) protection: active
Apr 21 10:21:14.930993 kernel: APIC: Static calls initialized
Apr 21 10:21:14.931004 kernel: efi: EFI v2.7 by EDK II
Apr 21 10:21:14.931019 kernel: efi: SMBIOS=0x7886a000 ACPI=0x7895d000 ACPI 2.0=0x7895d014 MEMATTR=0x7701a018
Apr 21 10:21:14.931034 kernel: SMBIOS 2.7 present.
Apr 21 10:21:14.931047 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Apr 21 10:21:14.931059 kernel: Hypervisor detected: KVM
Apr 21 10:21:14.931071 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 21 10:21:14.931083 kernel: kvm-clock: using sched offset of 3562019348 cycles
Apr 21 10:21:14.931097 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 21 10:21:14.931110 kernel: tsc: Detected 2499.998 MHz processor
Apr 21 10:21:14.931123 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 21 10:21:14.931136 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 21 10:21:14.931149 kernel: last_pfn = 0x7c97c max_arch_pfn = 0x400000000
Apr 21 10:21:14.931165 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Apr 21 10:21:14.931179 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 21 10:21:14.931192 kernel: Using GB pages for direct mapping
Apr 21 10:21:14.931205 kernel: Secure boot disabled
Apr 21 10:21:14.931218 kernel: ACPI: Early table checksum verification disabled
Apr 21 10:21:14.931230 kernel: ACPI: RSDP 0x000000007895D014 000024 (v02 AMAZON)
Apr 21 10:21:14.931243 kernel: ACPI: XSDT 0x000000007895C0E8 00006C (v01 AMAZON AMZNFACP 00000001 01000013)
Apr 21 10:21:14.931255 kernel: ACPI: FACP 0x0000000078955000 000114 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Apr 21 10:21:14.931267 kernel: ACPI: DSDT 0x0000000078956000 00115A (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Apr 21 10:21:14.931283 kernel: ACPI: FACS 0x00000000789D0000 000040
Apr 21 10:21:14.931296 kernel: ACPI: WAET 0x000000007895B000 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Apr 21 10:21:14.931310 kernel: ACPI: SLIT 0x000000007895A000 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Apr 21 10:21:14.931323 kernel: ACPI: APIC 0x0000000078959000 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Apr 21 10:21:14.931337 kernel: ACPI: SRAT 0x0000000078958000 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Apr 21 10:21:14.931351 kernel: ACPI: HPET 0x0000000078954000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Apr 21 10:21:14.931369 kernel: ACPI: SSDT 0x0000000078953000 000759 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 21 10:21:14.931387 kernel: ACPI: SSDT 0x0000000078952000 0000D1 (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Apr 21 10:21:14.931402 kernel: ACPI: BGRT 0x0000000078951000 000038 (v01 AMAZON AMAZON 00000002 01000013)
Apr 21 10:21:14.931416 kernel: ACPI: Reserving FACP table memory at [mem 0x78955000-0x78955113]
Apr 21 10:21:14.931431 kernel: ACPI: Reserving DSDT table memory at [mem 0x78956000-0x78957159]
Apr 21 10:21:14.931444 kernel: ACPI: Reserving FACS table memory at [mem 0x789d0000-0x789d003f]
Apr 21 10:21:14.931456 kernel: ACPI: Reserving WAET table memory at [mem 0x7895b000-0x7895b027]
Apr 21 10:21:14.931469 kernel: ACPI: Reserving SLIT table memory at [mem 0x7895a000-0x7895a06b]
Apr 21 10:21:14.931483 kernel: ACPI: Reserving APIC table memory at [mem 0x78959000-0x78959075]
Apr 21 10:21:14.931495 kernel: ACPI: Reserving SRAT table memory at [mem 0x78958000-0x7895809f]
Apr 21 10:21:14.931508 kernel: ACPI: Reserving HPET table memory at [mem 0x78954000-0x78954037]
Apr 21 10:21:14.931522 kernel: ACPI: Reserving SSDT table memory at [mem 0x78953000-0x78953758]
Apr 21 10:21:14.931535 kernel: ACPI: Reserving SSDT table memory at [mem 0x78952000-0x789520d0]
Apr 21 10:21:14.931548 kernel: ACPI: Reserving BGRT table memory at [mem 0x78951000-0x78951037]
Apr 21 10:21:14.931561 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 21 10:21:14.931575 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 21 10:21:14.931589 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Apr 21 10:21:14.931606 kernel: NUMA: Initialized distance table, cnt=1
Apr 21 10:21:14.931619 kernel: NODE_DATA(0) allocated [mem 0x7a8f0000-0x7a8f5fff]
Apr 21 10:21:14.931634 kernel: Zone ranges:
Apr 21 10:21:14.931650 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 21 10:21:14.931666 kernel: DMA32 [mem 0x0000000001000000-0x000000007c97bfff]
Apr 21 10:21:14.931681 kernel: Normal empty
Apr 21 10:21:14.931697 kernel: Movable zone start for each node
Apr 21 10:21:14.931722 kernel: Early memory node ranges
Apr 21 10:21:14.931737 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Apr 21 10:21:14.931773 kernel: node 0: [mem 0x0000000000100000-0x00000000786cdfff]
Apr 21 10:21:14.931786 kernel: node 0: [mem 0x00000000789de000-0x000000007c97bfff]
Apr 21 10:21:14.931802 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007c97bfff]
Apr 21 10:21:14.931823 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 21 10:21:14.931835 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Apr 21 10:21:14.931847 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Apr 21 10:21:14.931860 kernel: On node 0, zone DMA32: 13956 pages in unavailable ranges
Apr 21 10:21:14.931872 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 21 10:21:14.931885 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 21 10:21:14.931899 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Apr 21 10:21:14.931916 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 21 10:21:14.931929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 21 10:21:14.931940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 21 10:21:14.931953 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 21 10:21:14.931969 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 21 10:21:14.931984 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Apr 21 10:21:14.931998 kernel: TSC deadline timer available
Apr 21 10:21:14.932012 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 21 10:21:14.932028 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Apr 21 10:21:14.932047 kernel: [mem 0x7ca00000-0xffffffff] available for PCI devices
Apr 21 10:21:14.932064 kernel: Booting paravirtualized kernel on KVM
Apr 21 10:21:14.932081 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 21 10:21:14.932097 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 21 10:21:14.932114 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 21 10:21:14.932131 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 21 10:21:14.932147 kernel: pcpu-alloc: [0] 0 1
Apr 21 10:21:14.932162 kernel: kvm-guest: PV spinlocks enabled
Apr 21 10:21:14.932176 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 21 10:21:14.932198 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:21:14.932214 kernel: random: crng init done
Apr 21 10:21:14.932229 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Apr 21 10:21:14.932243 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 21 10:21:14.932255 kernel: Fallback order for Node 0: 0
Apr 21 10:21:14.932268 kernel: Built 1 zonelists, mobility grouping on. Total pages: 501318
Apr 21 10:21:14.932283 kernel: Policy zone: DMA32
Apr 21 10:21:14.932297 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 21 10:21:14.932314 kernel: Memory: 1874640K/2037804K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 162904K reserved, 0K cma-reserved)
Apr 21 10:21:14.932661 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 21 10:21:14.932684 kernel: Kernel/User page tables isolation: enabled
Apr 21 10:21:14.932698 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 21 10:21:14.932713 kernel: ftrace: allocated 149 pages with 4 groups
Apr 21 10:21:14.932728 kernel: Dynamic Preempt: voluntary
Apr 21 10:21:14.932742 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 21 10:21:14.932870 kernel: rcu: RCU event tracing is enabled.
Apr 21 10:21:14.932885 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 21 10:21:14.932904 kernel: Trampoline variant of Tasks RCU enabled.
Apr 21 10:21:14.932916 kernel: Rude variant of Tasks RCU enabled.
Apr 21 10:21:14.932930 kernel: Tracing variant of Tasks RCU enabled.
Apr 21 10:21:14.932944 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 21 10:21:14.932958 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 21 10:21:14.932972 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 21 10:21:14.932985 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 21 10:21:14.933013 kernel: Console: colour dummy device 80x25
Apr 21 10:21:14.933027 kernel: printk: console [tty0] enabled
Apr 21 10:21:14.933041 kernel: printk: console [ttyS0] enabled
Apr 21 10:21:14.933056 kernel: ACPI: Core revision 20230628
Apr 21 10:21:14.933070 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Apr 21 10:21:14.933087 kernel: APIC: Switch to symmetric I/O mode setup
Apr 21 10:21:14.933102 kernel: x2apic enabled
Apr 21 10:21:14.933116 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 21 10:21:14.933132 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 21 10:21:14.933149 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Apr 21 10:21:14.933163 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Apr 21 10:21:14.933178 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Apr 21 10:21:14.933192 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 21 10:21:14.933207 kernel: Spectre V2 : Mitigation: Retpolines
Apr 21 10:21:14.933221 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 21 10:21:14.933236 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Apr 21 10:21:14.933250 kernel: RETBleed: Vulnerable
Apr 21 10:21:14.933264 kernel: Speculative Store Bypass: Vulnerable
Apr 21 10:21:14.933278 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:21:14.933292 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 21 10:21:14.933309 kernel: GDS: Unknown: Dependent on hypervisor status
Apr 21 10:21:14.933323 kernel: active return thunk: its_return_thunk
Apr 21 10:21:14.933338 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 21 10:21:14.933352 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 21 10:21:14.933367 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 21 10:21:14.933382 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 21 10:21:14.933396 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Apr 21 10:21:14.933410 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Apr 21 10:21:14.933424 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Apr 21 10:21:14.933438 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Apr 21 10:21:14.933453 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Apr 21 10:21:14.933470 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Apr 21 10:21:14.933485 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 21 10:21:14.933499 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Apr 21 10:21:14.933513 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Apr 21 10:21:14.933527 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Apr 21 10:21:14.933541 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Apr 21 10:21:14.933555 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Apr 21 10:21:14.933570 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Apr 21 10:21:14.933584 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Apr 21 10:21:14.933598 kernel: Freeing SMP alternatives memory: 32K
Apr 21 10:21:14.933613 kernel: pid_max: default: 32768 minimum: 301
Apr 21 10:21:14.933630 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 21 10:21:14.933645 kernel: landlock: Up and running.
Apr 21 10:21:14.933659 kernel: SELinux: Initializing.
Apr 21 10:21:14.933673 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 21 10:21:14.933687 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Apr 21 10:21:14.933702 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Apr 21 10:21:14.933716 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:21:14.933732 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:21:14.933746 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 21 10:21:14.933774 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Apr 21 10:21:14.933791 kernel: signal: max sigframe size: 3632
Apr 21 10:21:14.933806 kernel: rcu: Hierarchical SRCU implementation.
Apr 21 10:21:14.933821 kernel: rcu: Max phase no-delay instances is 400.
Apr 21 10:21:14.933835 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 21 10:21:14.933849 kernel: smp: Bringing up secondary CPUs ...
Apr 21 10:21:14.933863 kernel: smpboot: x86: Booting SMP configuration:
Apr 21 10:21:14.933877 kernel: .... node #0, CPUs: #1
Apr 21 10:21:14.933892 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 21 10:21:14.933908 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 21 10:21:14.933925 kernel: smp: Brought up 1 node, 2 CPUs
Apr 21 10:21:14.933939 kernel: smpboot: Max logical packages: 1
Apr 21 10:21:14.933953 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Apr 21 10:21:14.933969 kernel: devtmpfs: initialized
Apr 21 10:21:14.933983 kernel: x86/mm: Memory block size: 128MB
Apr 21 10:21:14.933998 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7895e000-0x789ddfff] (524288 bytes)
Apr 21 10:21:14.934013 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 21 10:21:14.934027 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 21 10:21:14.934041 kernel: pinctrl core: initialized pinctrl subsystem
Apr 21 10:21:14.934058 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 21 10:21:14.934073 kernel: audit: initializing netlink subsys (disabled)
Apr 21 10:21:14.934087 kernel: audit: type=2000 audit(1776766875.120:1): state=initialized audit_enabled=0 res=1
Apr 21 10:21:14.934101 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 21 10:21:14.934115 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 21 10:21:14.934130 kernel: cpuidle: using governor menu
Apr 21 10:21:14.934144 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 21 10:21:14.934158 kernel: dca service started, version 1.12.1
Apr 21 10:21:14.934172 kernel: PCI: Using configuration type 1 for base access
Apr 21 10:21:14.934189 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 21 10:21:14.934204 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 21 10:21:14.934218 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 21 10:21:14.934233 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 21 10:21:14.934247 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 21 10:21:14.934262 kernel: ACPI: Added _OSI(Module Device)
Apr 21 10:21:14.934276 kernel: ACPI: Added _OSI(Processor Device)
Apr 21 10:21:14.934290 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 21 10:21:14.934305 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 21 10:21:14.934322 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 21 10:21:14.934336 kernel: ACPI: Interpreter enabled
Apr 21 10:21:14.934350 kernel: ACPI: PM: (supports S0 S5)
Apr 21 10:21:14.934364 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 21 10:21:14.934378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 21 10:21:14.934392 kernel: PCI: Using E820 reservations for host bridge windows
Apr 21 10:21:14.934406 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Apr 21 10:21:14.934420 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 21 10:21:14.936950 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 21 10:21:14.937121 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 21 10:21:14.937270 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 21 10:21:14.937287 kernel: acpiphp: Slot [3] registered
Apr 21 10:21:14.937302 kernel: acpiphp: Slot [4] registered
Apr 21 10:21:14.937317 kernel: acpiphp: Slot [5] registered
Apr 21 10:21:14.937331 kernel: acpiphp: Slot [6] registered
Apr 21 10:21:14.937346 kernel: acpiphp: Slot [7] registered
Apr 21 10:21:14.937364 kernel: acpiphp: Slot [8] registered
Apr 21 10:21:14.937379 kernel: acpiphp: Slot [9] registered
Apr 21 10:21:14.937392 kernel: acpiphp: Slot [10] registered
Apr 21 10:21:14.937407 kernel: acpiphp: Slot [11] registered
Apr 21 10:21:14.937421 kernel: acpiphp: Slot [12] registered
Apr 21 10:21:14.937436 kernel: acpiphp: Slot [13] registered
Apr 21 10:21:14.937451 kernel: acpiphp: Slot [14] registered
Apr 21 10:21:14.937464 kernel: acpiphp: Slot [15] registered
Apr 21 10:21:14.937479 kernel: acpiphp: Slot [16] registered
Apr 21 10:21:14.937493 kernel: acpiphp: Slot [17] registered
Apr 21 10:21:14.937510 kernel: acpiphp: Slot [18] registered
Apr 21 10:21:14.937524 kernel: acpiphp: Slot [19] registered
Apr 21 10:21:14.937538 kernel: acpiphp: Slot [20] registered
Apr 21 10:21:14.937552 kernel: acpiphp: Slot [21] registered
Apr 21 10:21:14.937567 kernel: acpiphp: Slot [22] registered
Apr 21 10:21:14.937580 kernel: acpiphp: Slot [23] registered
Apr 21 10:21:14.937595 kernel: acpiphp: Slot [24] registered
Apr 21 10:21:14.937608 kernel: acpiphp: Slot [25] registered
Apr 21 10:21:14.937623 kernel: acpiphp: Slot [26] registered
Apr 21 10:21:14.937639 kernel: acpiphp: Slot [27] registered
Apr 21 10:21:14.937654 kernel: acpiphp: Slot [28] registered
Apr 21 10:21:14.937668 kernel: acpiphp: Slot [29] registered
Apr 21 10:21:14.937683 kernel: acpiphp: Slot [30] registered
Apr 21 10:21:14.937698 kernel: acpiphp: Slot [31] registered
Apr 21 10:21:14.937712 kernel: PCI host bridge to bus 0000:00
Apr 21 10:21:14.937869 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 21 10:21:14.937989 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 21 10:21:14.938116 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 21 10:21:14.938242 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Apr 21 10:21:14.938364 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x2000ffffffff window]
Apr 21 10:21:14.938486 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 21 10:21:14.938649 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 21 10:21:14.939404 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Apr 21 10:21:14.939580 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Apr 21 10:21:14.939748 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 21 10:21:14.939947 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Apr 21 10:21:14.940078 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Apr 21 10:21:14.940209 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Apr 21 10:21:14.940356 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Apr 21 10:21:14.940488 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Apr 21 10:21:14.940619 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Apr 21 10:21:14.942821 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Apr 21 10:21:14.942998 kernel: pci 0000:00:03.0: reg 0x10: [mem 0x80000000-0x803fffff pref]
Apr 21 10:21:14.943143 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Apr 21 10:21:14.943278 kernel: pci 0000:00:03.0: BAR 0: assigned to efifb
Apr 21 10:21:14.943421 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Apr 21 10:21:14.943584 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Apr 21 10:21:14.943831 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80404000-0x80407fff]
Apr 21 10:21:14.944008 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Apr 21 10:21:14.944165 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80400000-0x80403fff]
Apr 21 10:21:14.944189 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 21 10:21:14.944207 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 21 10:21:14.944225 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 21 10:21:14.944243 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 21 10:21:14.944260 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 21 10:21:14.944283 kernel: iommu: Default domain type: Translated
Apr 21 10:21:14.944300 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 21 10:21:14.944317 kernel: efivars: Registered efivars operations
Apr 21 10:21:14.944334 kernel: PCI: Using ACPI for IRQ routing
Apr 21 10:21:14.944351 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 21 10:21:14.944368 kernel: e820: reserve RAM buffer [mem 0x786ce000-0x7bffffff]
Apr 21 10:21:14.944385 kernel: e820: reserve RAM buffer [mem 0x7c97c000-0x7fffffff]
Apr 21 10:21:14.944541 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Apr 21 10:21:14.944701 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Apr 21 10:21:14.944924 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Apr 21 10:21:14.944949 kernel: vgaarb: loaded
Apr 21 10:21:14.944966 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Apr 21 10:21:14.944982 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Apr 21 10:21:14.944998 kernel: clocksource: Switched to clocksource kvm-clock
Apr 21 10:21:14.945013 kernel: VFS: Disk quotas dquot_6.6.0
Apr 21 10:21:14.945030 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 21 10:21:14.945045 kernel: pnp: PnP ACPI init
Apr 21 10:21:14.945066 kernel: pnp: PnP ACPI: found 5 devices
Apr 21 10:21:14.945082 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 21 10:21:14.945097 kernel: NET: Registered PF_INET protocol family
Apr 21 10:21:14.945114 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Apr 21 10:21:14.945132 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Apr 21 10:21:14.945148 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 21 10:21:14.945165 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 21 10:21:14.945182 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Apr 21 10:21:14.945199 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Apr 21 10:21:14.945220 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 21 10:21:14.945237 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Apr 21 10:21:14.945254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 21 10:21:14.945271 kernel: NET: Registered PF_XDP protocol family
Apr 21 10:21:14.945424 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 21 10:21:14.945566 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 21 10:21:14.945701 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 21 10:21:14.945887 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Apr 21 10:21:14.946025 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x2000ffffffff window]
Apr 21 10:21:14.946191 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 21 10:21:14.946214 kernel: PCI: CLS 0 bytes, default 64
Apr 21 10:21:14.946232 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 21 10:21:14.946250 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Apr 21 10:21:14.946266 kernel: clocksource: Switched to clocksource tsc
Apr 21 10:21:14.946284 kernel: Initialise system trusted keyrings
Apr 21 10:21:14.946301 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Apr 21 10:21:14.946318 kernel: Key type asymmetric registered
Apr 21 10:21:14.946340 kernel: Asymmetric key parser 'x509' registered
Apr 21 10:21:14.946356 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 21 10:21:14.946372 kernel: io scheduler mq-deadline registered
Apr 21 10:21:14.946389 kernel: io scheduler kyber registered
Apr 21 10:21:14.946406 kernel: io scheduler bfq registered
Apr 21 10:21:14.946423 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 21 10:21:14.946440 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 21 10:21:14.946457 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 21 10:21:14.946475 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 21 10:21:14.946497 kernel: i8042: Warning: Keylock active
Apr 21 10:21:14.946513 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 21 10:21:14.946530 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 21 10:21:14.946688 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 21 10:21:14.947892 kernel: rtc_cmos 00:00: registered as rtc0
Apr 21 10:21:14.948053 kernel: rtc_cmos 00:00: setting system clock to 2026-04-21T10:21:14 UTC (1776766874)
Apr 21 10:21:14.948185 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 21 10:21:14.948205 kernel: intel_pstate: CPU model not supported
Apr 21 10:21:14.948226 kernel: efifb: probing for efifb
Apr 21 10:21:14.948241 kernel: efifb: framebuffer at 0x80000000, using 1920k, total 1920k
Apr 21 10:21:14.948257 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
Apr 21 10:21:14.948272 kernel: efifb: scrolling: redraw
Apr 21 10:21:14.948287 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Apr 21 10:21:14.948302 kernel: Console: switching to colour frame buffer device 100x37
Apr 21 10:21:14.948317 kernel: fb0: EFI VGA frame buffer device
Apr 21 10:21:14.948331 kernel: pstore: Using crash dump compression: deflate
Apr 21 10:21:14.948347 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 21 10:21:14.948365 kernel: NET: Registered PF_INET6 protocol family
Apr 21 10:21:14.948380 kernel: Segment Routing with IPv6
Apr 21 10:21:14.948396 kernel: In-situ OAM (IOAM) with IPv6
Apr 21 10:21:14.948411 kernel: NET: Registered PF_PACKET protocol family
Apr 21 10:21:14.948426 kernel: Key type dns_resolver registered
Apr 21 10:21:14.948441 kernel: IPI shorthand broadcast: enabled
Apr 21 10:21:14.948483 kernel: sched_clock: Marking stable (485001979, 145431247)->(703496714, -73063488)
Apr 21 10:21:14.948501 kernel: registered taskstats version 1
Apr 21 10:21:14.948518 kernel: Loading compiled-in X.509 certificates
Apr 21 10:21:14.948538 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: c59d945e31647ab89a50a01beeb265fbb707808b'
Apr 21 10:21:14.948553 kernel: Key type .fscrypt registered
Apr 21 10:21:14.948569 kernel: Key type fscrypt-provisioning registered
Apr 21 10:21:14.948585 kernel: ima: No TPM chip found, activating TPM-bypass!
Apr 21 10:21:14.948605 kernel: ima: Allocated hash algorithm: sha1
Apr 21 10:21:14.948621 kernel: ima: No architecture policies found
Apr 21 10:21:14.948637 kernel: clk: Disabling unused clocks
Apr 21 10:21:14.948653 kernel: Freeing unused kernel image (initmem) memory: 42892K
Apr 21 10:21:14.948670 kernel: Write protecting the kernel read-only data: 36864k
Apr 21 10:21:14.948690 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 21 10:21:14.948706 kernel: Run /init as init process
Apr 21 10:21:14.948723 kernel: with arguments:
Apr 21 10:21:14.948739 kernel: /init
Apr 21 10:21:14.948823 kernel: with environment:
Apr 21 10:21:14.948837 kernel: HOME=/
Apr 21 10:21:14.948851 kernel: TERM=linux
Apr 21 10:21:14.948869 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:21:14.948896 systemd[1]: Detected virtualization amazon.
Apr 21 10:21:14.948911 systemd[1]: Detected architecture x86-64.
Apr 21 10:21:14.948926 systemd[1]: Running in initrd.
Apr 21 10:21:14.948941 systemd[1]: No hostname configured, using default hostname.
Apr 21 10:21:14.948955 systemd[1]: Hostname set to .
Apr 21 10:21:14.948971 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:21:14.948986 systemd[1]: Queued start job for default target initrd.target.
Apr 21 10:21:14.949000 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:21:14.949018 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:21:14.949034 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 21 10:21:14.949050 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:21:14.949066 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 21 10:21:14.949087 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 21 10:21:14.949109 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 21 10:21:14.949126 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 21 10:21:14.949143 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:21:14.949160 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:21:14.949177 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:21:14.949194 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:21:14.949211 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:21:14.949232 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:21:14.949250 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:21:14.949269 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:21:14.949288 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 21 10:21:14.949307 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 21 10:21:14.949328 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:21:14.949348 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:21:14.949367 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:21:14.949392 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:21:14.949416 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 21 10:21:14.949435 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:21:14.949453 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 21 10:21:14.949473 systemd[1]: Starting systemd-fsck-usr.service...
Apr 21 10:21:14.949492 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:21:14.949511 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:21:14.949569 systemd-journald[179]: Collecting audit messages is disabled.
Apr 21 10:21:14.949615 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:21:14.949632 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 21 10:21:14.949650 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:21:14.949667 systemd[1]: Finished systemd-fsck-usr.service.
Apr 21 10:21:14.949689 systemd-journald[179]: Journal started
Apr 21 10:21:14.949724 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2644f5e0b016e95f5b80b907f8210e) is 4.7M, max 38.2M, 33.4M free.
Apr 21 10:21:14.961700 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:21:14.967201 systemd-modules-load[180]: Inserted module 'overlay'
Apr 21 10:21:14.974194 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:21:14.975576 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:21:14.978086 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:14.989967 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:21:14.994956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:21:15.001997 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:21:15.008982 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 21 10:21:15.020401 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 21 10:21:15.022991 kernel: Bridge firewalling registered
Apr 21 10:21:15.023043 systemd-modules-load[180]: Inserted module 'br_netfilter'
Apr 21 10:21:15.024219 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:21:15.037938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:21:15.040000 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:21:15.042116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:21:15.044081 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:21:15.052081 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 21 10:21:15.053202 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:21:15.064050 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:21:15.075544 dracut-cmdline[211]: dracut-dracut-053
Apr 21 10:21:15.080336 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8954524425723bfa042c04f94c1e1c390b7f44ef08e5f6b6ea2dffa22a37ca9a
Apr 21 10:21:15.110232 systemd-resolved[214]: Positive Trust Anchors:
Apr 21 10:21:15.110249 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:21:15.110312 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:21:15.118611 systemd-resolved[214]: Defaulting to hostname 'linux'.
Apr 21 10:21:15.121985 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:21:15.123487 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:21:15.169798 kernel: SCSI subsystem initialized
Apr 21 10:21:15.179865 kernel: Loading iSCSI transport class v2.0-870.
Apr 21 10:21:15.191867 kernel: iscsi: registered transport (tcp)
Apr 21 10:21:15.213251 kernel: iscsi: registered transport (qla4xxx)
Apr 21 10:21:15.213334 kernel: QLogic iSCSI HBA Driver
Apr 21 10:21:15.253097 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:21:15.257995 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 21 10:21:15.284971 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 21 10:21:15.285051 kernel: device-mapper: uevent: version 1.0.3
Apr 21 10:21:15.287844 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 21 10:21:15.328807 kernel: raid6: avx512x4 gen() 18062 MB/s
Apr 21 10:21:15.346784 kernel: raid6: avx512x2 gen() 17808 MB/s
Apr 21 10:21:15.364788 kernel: raid6: avx512x1 gen() 18109 MB/s
Apr 21 10:21:15.382783 kernel: raid6: avx2x4 gen() 18137 MB/s
Apr 21 10:21:15.400785 kernel: raid6: avx2x2 gen() 18120 MB/s
Apr 21 10:21:15.419331 kernel: raid6: avx2x1 gen() 13716 MB/s
Apr 21 10:21:15.419390 kernel: raid6: using algorithm avx2x4 gen() 18137 MB/s
Apr 21 10:21:15.438339 kernel: raid6: .... xor() 7350 MB/s, rmw enabled
Apr 21 10:21:15.438400 kernel: raid6: using avx512x2 recovery algorithm
Apr 21 10:21:15.460794 kernel: xor: automatically using best checksumming function avx
Apr 21 10:21:15.620790 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 21 10:21:15.631529 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:21:15.636976 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:21:15.663419 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Apr 21 10:21:15.668666 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:21:15.678008 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
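The `raid6: ... gen()` lines above are the kernel benchmarking its parity-generation routines and picking the fastest (here `avx2x4`). As a toy illustration of the parity math being measured, here is a simplified single-parity (RAID5-style XOR) sketch; the kernel's actual RAID6 code additionally computes a second, GF(2^8) syndrome, which is omitted here:

```python
def parity(blocks):
    """XOR all equally sized data blocks into one parity block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)


def recover(surviving, p):
    """Rebuild a single missing block: XOR the parity with all survivors."""
    return parity(surviving + [p])


data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)
# lose data[1]; recover it from the other blocks plus parity
assert recover([data[0], data[2]], p) == b"BBBB"
```

Because XOR is associative and self-inverse, any one lost block is the XOR of everything else, which is why the benchmark measures raw byte-XOR throughput.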
Apr 21 10:21:15.699135 dracut-pre-trigger[402]: rd.md=0: removing MD RAID activation
Apr 21 10:21:15.730821 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:21:15.736994 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:21:15.789981 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:21:15.801069 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 21 10:21:15.829186 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:21:15.832391 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:21:15.834367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:21:15.835872 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:21:15.843030 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 21 10:21:15.876917 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:21:15.898649 kernel: ena 0000:00:05.0: ENA device version: 0.10
Apr 21 10:21:15.898944 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Apr 21 10:21:15.903787 kernel: cryptd: max_cpu_qlen set to 1000
Apr 21 10:21:15.909796 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Apr 21 10:21:15.916024 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:21:15.917049 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:21:15.921046 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:21:15.922926 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:21:15.931541 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 21 10:21:15.931580 kernel: AES CTR mode by8 optimization enabled
Apr 21 10:21:15.923156 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:15.930244 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:21:15.940946 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80400000, mac addr 06:96:fb:48:8e:ff
Apr 21 10:21:15.940636 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:21:15.951693 (udev-worker)[452]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:21:15.962344 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:21:15.963196 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:15.972790 kernel: nvme nvme0: pci function 0000:00:04.0
Apr 21 10:21:15.973068 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Apr 21 10:21:15.976061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:21:15.994177 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Apr 21 10:21:15.996234 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:16.003127 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 21 10:21:16.003198 kernel: GPT:9289727 != 33554431
Apr 21 10:21:16.003989 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 21 10:21:16.015107 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 21 10:21:16.015141 kernel: GPT:9289727 != 33554431
Apr 21 10:21:16.015162 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 21 10:21:16.015182 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:21:16.031630 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
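The `GPT:9289727 != 33554431` warnings are benign on first boot: a GPT stores a backup (alternate) header in the disk's last LBA, and the Flatcar image's GPT was written for a smaller disk than the 16 GiB EBS volume it was copied onto. The arithmetic behind the two logged values (sector counts inferred from the log, assuming 512-byte logical sectors):

```python
def backup_header_lba(total_sectors):
    # A GPT's backup (alternate) header lives in the disk's last LBA,
    # i.e. total sectors minus one (LBAs are zero-based).
    return total_sectors - 1

image_sectors = 9_289_728    # disk size the image's GPT was written for (inferred)
volume_sectors = 33_554_432  # 16 GiB EBS volume in 512-byte sectors (inferred)

assert backup_header_lba(image_sectors) == 9289727    # where the header says it is
assert backup_header_lba(volume_sectors) == 33554431  # where the kernel expects it
```

The later `disk-uuid[629]: Secondary Header is updated.` lines in this log show the initrd relocating the backup structures to the volume's real end, which is the same repair tools like `sgdisk -e` perform.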
Apr 21 10:21:16.078384 kernel: BTRFS: device fsid 4627a20b-c3ad-458e-a05a-90623574a539 devid 1 transid 31 /dev/nvme0n1p3 scanned by (udev-worker) (455)
Apr 21 10:21:16.082823 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (444)
Apr 21 10:21:16.125501 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Apr 21 10:21:16.168622 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Apr 21 10:21:16.169224 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Apr 21 10:21:16.176583 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:21:16.183187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Apr 21 10:21:16.189984 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 21 10:21:16.197074 disk-uuid[629]: Primary Header is updated.
Apr 21 10:21:16.197074 disk-uuid[629]: Secondary Entries is updated.
Apr 21 10:21:16.197074 disk-uuid[629]: Secondary Header is updated.
Apr 21 10:21:16.202797 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:21:16.210903 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:21:16.216815 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:21:17.219806 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Apr 21 10:21:17.220359 disk-uuid[630]: The operation has completed successfully.
Apr 21 10:21:17.366954 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 21 10:21:17.367082 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 21 10:21:17.384075 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 21 10:21:17.388588 sh[971]: Success
Apr 21 10:21:17.404304 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 21 10:21:17.512113 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 21 10:21:17.520295 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 21 10:21:17.525183 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 21 10:21:17.566954 kernel: BTRFS info (device dm-0): first mount of filesystem 4627a20b-c3ad-458e-a05a-90623574a539
Apr 21 10:21:17.567021 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:21:17.569049 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 21 10:21:17.570977 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 21 10:21:17.573439 kernel: BTRFS info (device dm-0): using free space tree
Apr 21 10:21:17.589793 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 21 10:21:17.594080 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 21 10:21:17.595351 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 21 10:21:17.606986 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 21 10:21:17.609048 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 21 10:21:17.639852 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:21:17.644315 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:21:17.644383 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:21:17.652783 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:21:17.664698 systemd[1]: mnt-oem.mount: Deactivated successfully.
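`verity-setup.service` above activates `/dev/mapper/usr` as a dm-verity device: every block read from the read-only /usr partition is checked against a hash tree whose root must equal the `verity.usrhash=` value on the kernel command line, so offline tampering is detected at read time. A toy single-level sketch of that principle (real dm-verity trees are multi-level, salted, and sized to the block device):

```python
import hashlib


def h(data):
    return hashlib.sha256(data).digest()


def root_hash(blocks):
    # One-level hash tree: hash each block, then hash the concatenation
    # of the block hashes. dm-verity does this recursively per 4K block.
    return h(b"".join(h(blk) for blk in blocks))


blocks = [b"a" * 16, b"b" * 16]
root = root_hash(blocks)

# An unmodified device reproduces the trusted root; a tampered block does not.
assert root_hash(blocks) == root
assert root_hash([b"a" * 16, b"X" + b"b" * 15]) != root
```

The trusted root travels out-of-band (here, on the verified kernel command line), so the device itself holds no secret, only data plus the tree.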
Apr 21 10:21:17.667784 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:21:17.674482 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 21 10:21:17.681996 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 21 10:21:17.741036 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:21:17.746277 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:21:17.810521 systemd-networkd[1163]: lo: Link UP
Apr 21 10:21:17.810820 systemd-networkd[1163]: lo: Gained carrier
Apr 21 10:21:17.814422 systemd-networkd[1163]: Enumeration completed
Apr 21 10:21:17.814977 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:21:17.819225 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:21:17.819229 systemd-networkd[1163]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:21:17.823273 systemd[1]: Reached target network.target - Network.
Apr 21 10:21:17.827122 systemd-networkd[1163]: eth0: Link UP
Apr 21 10:21:17.827132 systemd-networkd[1163]: eth0: Gained carrier
Apr 21 10:21:17.827148 systemd-networkd[1163]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:21:17.839927 systemd-networkd[1163]: eth0: DHCPv4 address 172.31.24.37/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:21:17.841998 ignition[1104]: Ignition 2.19.0
Apr 21 10:21:17.842012 ignition[1104]: Stage: fetch-offline
Apr 21 10:21:17.842272 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:17.844239 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
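`eth0` above is matched by Flatcar's catch-all `zz-default.network` (hence the "potentially unpredictable interface name" note, since `net.ifnames=0` is on the command line) and then configured via DHCPv4. A hypothetical minimal `.network` file with the same observable effect, not the actual file Flatcar ships:

```ini
# /etc/systemd/network/50-dhcp.network (illustrative only)
[Match]
# With net.ifnames=0 the kernel names come back as eth0, eth1, ...
Name=eth*

[Network]
# Acquire address, gateway, and DNS over DHCP, as in the log above.
DHCP=yes
```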
Apr 21 10:21:17.842287 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:21:17.842620 ignition[1104]: Ignition finished successfully
Apr 21 10:21:17.851945 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 21 10:21:17.866161 ignition[1173]: Ignition 2.19.0
Apr 21 10:21:17.866171 ignition[1173]: Stage: fetch
Apr 21 10:21:17.866517 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:17.866526 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:21:17.866611 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:21:17.893204 ignition[1173]: PUT result: OK
Apr 21 10:21:17.898202 ignition[1173]: parsed url from cmdline: ""
Apr 21 10:21:17.898213 ignition[1173]: no config URL provided
Apr 21 10:21:17.898224 ignition[1173]: reading system config file "/usr/lib/ignition/user.ign"
Apr 21 10:21:17.898240 ignition[1173]: no config at "/usr/lib/ignition/user.ign"
Apr 21 10:21:17.898265 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:21:17.910550 ignition[1173]: PUT result: OK
Apr 21 10:21:17.910668 ignition[1173]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Apr 21 10:21:17.911368 ignition[1173]: GET result: OK
Apr 21 10:21:17.911458 ignition[1173]: parsing config with SHA512: 9c7ad3ab27e1fd9ac1c1f11b3e0f58db54aea15de91821a875751259bc7f2b7012ddcdd56654b7494414b10b0d390e160b192c396a14187f245ab10a5604b98e
Apr 21 10:21:17.916132 unknown[1173]: fetched base config from "system"
Apr 21 10:21:17.916150 unknown[1173]: fetched base config from "system"
Apr 21 10:21:17.916162 unknown[1173]: fetched user config from "aws"
Apr 21 10:21:17.917305 ignition[1173]: fetch: fetch complete
Apr 21 10:21:17.917313 ignition[1173]: fetch: fetch passed
Apr 21 10:21:17.917375 ignition[1173]: Ignition finished successfully
Apr 21 10:21:17.919539 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
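The `PUT .../api/token` followed by `GET .../user-data` above is EC2's IMDSv2 flow: a PUT mints a short-lived session token, and all subsequent metadata reads must carry it in a header. A sketch of the same two steps (header names are AWS's documented IMDSv2 headers, the API date is taken from the GET line above, and error handling/retries are omitted):

```python
import urllib.request

IMDS = "http://169.254.169.254"


def token_request(ttl_seconds=21600):
    # Step 1: PUT mints a session token valid for ttl_seconds.
    return urllib.request.Request(
        IMDS + "/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )


def user_data_request(token):
    # Step 2: GET user-data, presenting the token in a header.
    return urllib.request.Request(
        IMDS + "/2019-10-01/user-data",  # API date as logged above
        headers={"X-aws-ec2-metadata-token": token},
    )


# On an actual EC2 instance this would be:
#   token = urllib.request.urlopen(token_request()).read().decode()
#   user_data = urllib.request.urlopen(user_data_request(token)).read()
```

Ignition then verifies the fetched config (the `parsing config with SHA512:` line) before merging it with the base config, which is what "fetched user config from aws" records.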
Apr 21 10:21:17.930058 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 21 10:21:17.946357 ignition[1179]: Ignition 2.19.0
Apr 21 10:21:17.946369 ignition[1179]: Stage: kargs
Apr 21 10:21:17.946939 ignition[1179]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:17.946954 ignition[1179]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:21:17.947075 ignition[1179]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:21:17.948134 ignition[1179]: PUT result: OK
Apr 21 10:21:17.951648 ignition[1179]: kargs: kargs passed
Apr 21 10:21:17.951827 ignition[1179]: Ignition finished successfully
Apr 21 10:21:17.953792 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 21 10:21:17.957952 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 21 10:21:17.982972 ignition[1185]: Ignition 2.19.0
Apr 21 10:21:17.982986 ignition[1185]: Stage: disks
Apr 21 10:21:17.983463 ignition[1185]: no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:17.983477 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:21:17.983599 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:21:17.985666 ignition[1185]: PUT result: OK
Apr 21 10:21:17.989206 ignition[1185]: disks: disks passed
Apr 21 10:21:17.989284 ignition[1185]: Ignition finished successfully
Apr 21 10:21:17.991084 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 21 10:21:17.991925 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 21 10:21:17.992334 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 21 10:21:17.992914 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:21:17.993480 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:21:17.994090 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:21:17.997959 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 21 10:21:18.030388 systemd-fsck[1193]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Apr 21 10:21:18.034647 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 21 10:21:18.039926 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 21 10:21:18.148781 kernel: EXT4-fs (nvme0n1p9): mounted filesystem fd5e5f40-ad85-46ea-abb5-3cc3d4cd8af5 r/w with ordered data mode. Quota mode: none.
Apr 21 10:21:18.149188 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 21 10:21:18.150339 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:21:18.157886 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:21:18.161103 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 21 10:21:18.163021 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 21 10:21:18.163097 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 21 10:21:18.163133 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:21:18.171246 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 21 10:21:18.177968 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 21 10:21:18.187896 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1212)
Apr 21 10:21:18.196106 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:21:18.196189 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:21:18.196210 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:21:18.209892 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:21:18.211388 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:21:18.266930 initrd-setup-root[1238]: cut: /sysroot/etc/passwd: No such file or directory
Apr 21 10:21:18.273792 initrd-setup-root[1245]: cut: /sysroot/etc/group: No such file or directory
Apr 21 10:21:18.279054 initrd-setup-root[1252]: cut: /sysroot/etc/shadow: No such file or directory
Apr 21 10:21:18.284784 initrd-setup-root[1259]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 21 10:21:18.391497 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 21 10:21:18.395917 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 21 10:21:18.404151 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 21 10:21:18.416070 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:21:18.444238 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 21 10:21:18.447455 ignition[1326]: INFO : Ignition 2.19.0
Apr 21 10:21:18.449156 ignition[1326]: INFO : Stage: mount
Apr 21 10:21:18.449156 ignition[1326]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:18.449156 ignition[1326]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:21:18.449156 ignition[1326]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:21:18.450706 ignition[1326]: INFO : PUT result: OK
Apr 21 10:21:18.454686 ignition[1326]: INFO : mount: mount passed
Apr 21 10:21:18.455352 ignition[1326]: INFO : Ignition finished successfully
Apr 21 10:21:18.456539 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 21 10:21:18.462014 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 21 10:21:18.564254 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 21 10:21:18.570028 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 21 10:21:18.587989 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1338)
Apr 21 10:21:18.588090 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 855d7a31-c001-47db-a073-492800715453
Apr 21 10:21:18.592260 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Apr 21 10:21:18.592331 kernel: BTRFS info (device nvme0n1p6): using free space tree
Apr 21 10:21:18.599968 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Apr 21 10:21:18.601921 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 21 10:21:18.624033 ignition[1355]: INFO : Ignition 2.19.0
Apr 21 10:21:18.624033 ignition[1355]: INFO : Stage: files
Apr 21 10:21:18.625431 ignition[1355]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:18.625431 ignition[1355]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:21:18.625431 ignition[1355]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:21:18.627314 ignition[1355]: INFO : PUT result: OK
Apr 21 10:21:18.632144 ignition[1355]: DEBUG : files: compiled without relabeling support, skipping
Apr 21 10:21:18.633006 ignition[1355]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 21 10:21:18.633006 ignition[1355]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 21 10:21:18.638536 ignition[1355]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 21 10:21:18.639434 ignition[1355]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 21 10:21:18.639434 ignition[1355]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 21 10:21:18.639059 unknown[1355]: wrote ssh authorized keys file for user: core
Apr 21 10:21:18.641996 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:21:18.641996 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 21 10:21:18.743972 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 21 10:21:18.956494 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:21:18.958372 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 21 10:21:19.028274 systemd-networkd[1163]: eth0: Gained IPv6LL
Apr 21 10:21:19.287909 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 21 10:21:20.079835 ignition[1355]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 21 10:21:20.079835 ignition[1355]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 21 10:21:20.082434 ignition[1355]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:21:20.082434 ignition[1355]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 21 10:21:20.082434 ignition[1355]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 21 10:21:20.082434 ignition[1355]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 21 10:21:20.082434 ignition[1355]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 21 10:21:20.082434 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:21:20.082434 ignition[1355]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 21 10:21:20.082434 ignition[1355]: INFO : files: files passed
Apr 21 10:21:20.082434 ignition[1355]: INFO : Ignition finished successfully
Apr 21 10:21:20.083859 systemd[1]: Finished ignition-files.service - Ignition (files).
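The files-stage ops above (user `core` with SSH keys, downloaded files, the `prepare-helm.service` unit plus its enable preset) are all driven by the user config fetched from AWS earlier. A hypothetical, heavily reduced Ignition config that would produce ops of this shape; the spec version, key contents, and unit body are illustrative, not the instance's actual user-data:

```json
{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      { "name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@host"] }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
        "contents": { "source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz" }
      }
    ]
  },
  "systemd": {
    "units": [
      { "name": "prepare-helm.service", "enabled": true, "contents": "[Unit]\n..." }
    ]
  }
}
```

Each `storage.files` entry maps to one `createFiles: op(...)` pair in the log, and each `systemd.units` entry to a `processing unit` / `setting preset` pair.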
Apr 21 10:21:20.092040 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 21 10:21:20.096014 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 21 10:21:20.099525 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 21 10:21:20.099671 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 21 10:21:20.125580 initrd-setup-root-after-ignition[1383]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:21:20.125580 initrd-setup-root-after-ignition[1383]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:21:20.128655 initrd-setup-root-after-ignition[1387]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 21 10:21:20.130698 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:21:20.131372 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 21 10:21:20.135982 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 21 10:21:20.162653 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 21 10:21:20.162843 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 21 10:21:20.164167 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 21 10:21:20.165314 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 21 10:21:20.166129 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 21 10:21:20.170971 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 21 10:21:20.186269 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:21:20.192954 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 21 10:21:20.204589 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:21:20.205366 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:21:20.206396 systemd[1]: Stopped target timers.target - Timer Units.
Apr 21 10:21:20.207263 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 21 10:21:20.207446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 21 10:21:20.208772 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 21 10:21:20.209628 systemd[1]: Stopped target basic.target - Basic System.
Apr 21 10:21:20.210436 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 21 10:21:20.211227 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 21 10:21:20.212160 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 21 10:21:20.212941 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 21 10:21:20.213708 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 21 10:21:20.214520 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 21 10:21:20.215747 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 21 10:21:20.216549 systemd[1]: Stopped target swap.target - Swaps.
Apr 21 10:21:20.217276 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 21 10:21:20.217456 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 21 10:21:20.218556 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:21:20.219366 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:21:20.220172 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 21 10:21:20.220314 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:21:20.220983 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 21 10:21:20.221154 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 21 10:21:20.222498 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 21 10:21:20.222680 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 21 10:21:20.223407 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 21 10:21:20.223557 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 21 10:21:20.231090 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 21 10:21:20.235075 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 21 10:21:20.236511 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 21 10:21:20.236737 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:21:20.239733 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 21 10:21:20.239934 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 21 10:21:20.250871 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 21 10:21:20.252702 ignition[1407]: INFO : Ignition 2.19.0
Apr 21 10:21:20.252702 ignition[1407]: INFO : Stage: umount
Apr 21 10:21:20.257881 ignition[1407]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 21 10:21:20.257881 ignition[1407]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Apr 21 10:21:20.257881 ignition[1407]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Apr 21 10:21:20.257881 ignition[1407]: INFO : PUT result: OK
Apr 21 10:21:20.255236 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 21 10:21:20.264472 ignition[1407]: INFO : umount: umount passed
Apr 21 10:21:20.265248 ignition[1407]: INFO : Ignition finished successfully
Apr 21 10:21:20.267141 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 21 10:21:20.267309 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 21 10:21:20.269486 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 21 10:21:20.269559 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 21 10:21:20.270653 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 21 10:21:20.270722 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 21 10:21:20.271919 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 21 10:21:20.271979 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 21 10:21:20.273078 systemd[1]: Stopped target network.target - Network.
Apr 21 10:21:20.274125 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 21 10:21:20.274197 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 21 10:21:20.275281 systemd[1]: Stopped target paths.target - Path Units.
Apr 21 10:21:20.276336 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 21 10:21:20.280115 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:21:20.280528 systemd[1]: Stopped target slices.target - Slice Units.
Apr 21 10:21:20.280867 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 21 10:21:20.281270 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 21 10:21:20.281330 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 21 10:21:20.281668 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 21 10:21:20.281705 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 21 10:21:20.282907 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 21 10:21:20.282978 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 21 10:21:20.283620 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 21 10:21:20.283977 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 21 10:21:20.284743 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 21 10:21:20.285507 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 21 10:21:20.287301 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 21 10:21:20.289833 systemd-networkd[1163]: eth0: DHCPv6 lease lost
Apr 21 10:21:20.292033 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 21 10:21:20.292184 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 21 10:21:20.293584 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 21 10:21:20.293669 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:21:20.297887 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 21 10:21:20.298401 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 21 10:21:20.298479 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 21 10:21:20.299417 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:21:20.303209 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 21 10:21:20.303822 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 21 10:21:20.315321 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 21 10:21:20.316135 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:21:20.325400 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 21 10:21:20.325486 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:21:20.326931 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 21 10:21:20.326991 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:21:20.327550 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 21 10:21:20.327621 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 21 10:21:20.329516 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 21 10:21:20.329568 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 21 10:21:20.330496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 21 10:21:20.330563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 21 10:21:20.341028 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 21 10:21:20.343926 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 21 10:21:20.344023 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:21:20.345135 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 21 10:21:20.345198 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:21:20.345908 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 21 10:21:20.345974 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:21:20.346574 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 21 10:21:20.346628 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:21:20.349003 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 21 10:21:20.349078 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:21:20.351868 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 21 10:21:20.351936 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:21:20.352603 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 21 10:21:20.352664 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:20.354193 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 21 10:21:20.354320 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 21 10:21:20.355187 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 21 10:21:20.355305 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 21 10:21:20.421369 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 21 10:21:20.421522 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 21 10:21:20.423243 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 21 10:21:20.424173 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 21 10:21:20.424269 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 21 10:21:20.430963 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 21 10:21:20.439563 systemd[1]: Switching root.
Apr 21 10:21:20.473459 systemd-journald[179]: Journal stopped
Apr 21 10:21:21.854311 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
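The gap between "Journal stopped" and "Received SIGTERM from PID 1" spans the switch-root handover. These syslog-style timestamps carry no year, so one must be assumed when computing deltas; the sketch below supplies 2026 from the kernel build banner at the top of this log:

```python
from datetime import datetime

def log_ts(ts: str, year: int = 2026) -> datetime:
    """Parse a syslog-style timestamp such as 'Apr 21 10:21:20.473459'.

    The year is not part of the log format, so it is passed in explicitly
    (2026 here, taken from the kernel version banner in this boot)."""
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")

# How long the initrd journal was down across the switch-root:
stopped = log_ts("Apr 21 10:21:20.473459")   # "Journal stopped"
sigterm = log_ts("Apr 21 10:21:21.854311")   # "Received SIGTERM from PID 1"
gap = (sigterm - stopped).total_seconds()
```

Note that a naive year guess breaks for boots that straddle New Year's Eve; journal exports with `__REALTIME_TIMESTAMP` avoid the problem entirely.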
Apr 21 10:21:21.854424 kernel: SELinux: policy capability network_peer_controls=1
Apr 21 10:21:21.854450 kernel: SELinux: policy capability open_perms=1
Apr 21 10:21:21.854471 kernel: SELinux: policy capability extended_socket_class=1
Apr 21 10:21:21.854497 kernel: SELinux: policy capability always_check_network=0
Apr 21 10:21:21.854517 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 21 10:21:21.854538 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 21 10:21:21.854558 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 21 10:21:21.854578 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 21 10:21:21.854599 kernel: audit: type=1403 audit(1776766880.669:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 21 10:21:21.854633 systemd[1]: Successfully loaded SELinux policy in 42.172ms.
Apr 21 10:21:21.854663 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.056ms.
Apr 21 10:21:21.854688 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 21 10:21:21.854714 systemd[1]: Detected virtualization amazon.
Apr 21 10:21:21.854737 systemd[1]: Detected architecture x86-64.
Apr 21 10:21:21.856874 systemd[1]: Detected first boot.
Apr 21 10:21:21.856914 systemd[1]: Initializing machine ID from VM UUID.
Apr 21 10:21:21.856938 zram_generator::config[1449]: No configuration found.
Apr 21 10:21:21.856962 systemd[1]: Populated /etc with preset unit settings.
Apr 21 10:21:21.856982 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 21 10:21:21.857002 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 21 10:21:21.857030 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:21:21.857052 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 21 10:21:21.857072 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 21 10:21:21.857093 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 21 10:21:21.857120 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 21 10:21:21.857140 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 21 10:21:21.857160 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 21 10:21:21.857182 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 21 10:21:21.857201 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 21 10:21:21.857226 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 21 10:21:21.857253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 21 10:21:21.857273 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 21 10:21:21.857293 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 21 10:21:21.857319 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 21 10:21:21.857338 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 21 10:21:21.857357 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 21 10:21:21.857378 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 21 10:21:21.857399 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
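Slice names such as `system-addon\x2dconfig.slice` show systemd's unit-name escaping: the literal `-` inside "addon-config" is written `\x2d`, because a plain dash separates levels of the slice hierarchy (`system-addon\x2dconfig.slice` lives under `system.slice`). A minimal sketch covering just that dash case (the real `systemd-escape` tool also escapes `/`, leading dots, and other non-safe bytes):

```python
def escape_dashes(name: str) -> str:
    """Escape literal dashes the way systemd unit names do, as seen in the
    slice names above: 'addon-config' -> 'addon\\x2dconfig'.

    This is only the dash rule; it is not a full systemd-escape replacement."""
    return name.replace('-', '\\x2d')

def unescape_dashes(name: str) -> str:
    """Inverse of escape_dashes for the dash-only case."""
    return name.replace('\\x2d', '-')
```

With this, the unit `system-addon\x2dconfig.slice` round-trips to the path component `addon-config` shown in its description, `Slice /system/addon-config`.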
Apr 21 10:21:21.857425 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 21 10:21:21.857446 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 21 10:21:21.857466 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 21 10:21:21.857488 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 21 10:21:21.857509 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 21 10:21:21.857529 systemd[1]: Reached target slices.target - Slice Units.
Apr 21 10:21:21.857551 systemd[1]: Reached target swap.target - Swaps.
Apr 21 10:21:21.857576 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 21 10:21:21.857598 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 21 10:21:21.857621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 21 10:21:21.857643 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 21 10:21:21.857667 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 21 10:21:21.857692 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 21 10:21:21.857713 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 21 10:21:21.857734 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 21 10:21:21.859780 systemd[1]: Mounting media.mount - External Media Directory...
Apr 21 10:21:21.859916 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:21:21.859942 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 21 10:21:21.859966 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 21 10:21:21.859989 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 21 10:21:21.860012 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 21 10:21:21.860035 systemd[1]: Reached target machines.target - Containers.
Apr 21 10:21:21.860056 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 21 10:21:21.860077 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:21:21.860099 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 21 10:21:21.860125 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 21 10:21:21.860161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:21:21.860184 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:21:21.860206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:21:21.860228 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 21 10:21:21.860250 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:21:21.860273 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 21 10:21:21.860297 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 21 10:21:21.860331 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 21 10:21:21.860352 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 21 10:21:21.860374 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 21 10:21:21.860396 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 21 10:21:21.860418 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 21 10:21:21.860440 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 21 10:21:21.860461 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 21 10:21:21.860482 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 21 10:21:21.860504 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 21 10:21:21.860528 systemd[1]: Stopped verity-setup.service.
Apr 21 10:21:21.860551 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:21:21.860572 kernel: fuse: init (API version 7.39)
Apr 21 10:21:21.860594 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 21 10:21:21.860616 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 21 10:21:21.860638 systemd[1]: Mounted media.mount - External Media Directory.
Apr 21 10:21:21.860658 kernel: loop: module loaded
Apr 21 10:21:21.860679 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 21 10:21:21.860701 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 21 10:21:21.860725 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 21 10:21:21.860747 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 21 10:21:21.861967 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 21 10:21:21.862001 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 21 10:21:21.862026 kernel: ACPI: bus type drm_connector registered
Apr 21 10:21:21.862058 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:21:21.862082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:21:21.862106 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:21:21.862134 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:21:21.862158 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:21:21.862185 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:21:21.862210 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 21 10:21:21.862233 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 21 10:21:21.862258 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:21:21.862282 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:21:21.862305 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 21 10:21:21.862329 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 21 10:21:21.862393 systemd-journald[1531]: Collecting audit messages is disabled.
Apr 21 10:21:21.862439 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 21 10:21:21.862459 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 21 10:21:21.862481 systemd-journald[1531]: Journal started
Apr 21 10:21:21.862522 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec2644f5e0b016e95f5b80b907f8210e) is 4.7M, max 38.2M, 33.4M free.
Apr 21 10:21:21.865644 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 21 10:21:21.423086 systemd[1]: Queued start job for default target multi-user.target.
Apr 21 10:21:21.443320 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Apr 21 10:21:21.443953 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 21 10:21:21.891498 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 21 10:21:21.891612 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 21 10:21:21.893783 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 21 10:21:21.899786 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 21 10:21:21.911191 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 21 10:21:21.922821 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 21 10:21:21.927790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:21:21.939787 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 21 10:21:21.939881 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:21:21.953820 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 21 10:21:21.957781 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:21:21.965694 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 21 10:21:21.978277 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 21 10:21:21.989835 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 21 10:21:21.998897 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 21 10:21:22.002731 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 21 10:21:22.004568 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 21 10:21:22.007737 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 21 10:21:22.009781 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 21 10:21:22.014727 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 21 10:21:22.018400 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 21 10:21:22.046746 kernel: loop0: detected capacity change from 0 to 140768
Apr 21 10:21:22.062363 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 21 10:21:22.091175 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 21 10:21:22.102023 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 21 10:21:22.115844 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 21 10:21:22.121191 systemd-tmpfiles[1563]: ACLs are not supported, ignoring.
Apr 21 10:21:22.126392 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 21 10:21:22.121418 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 21 10:21:22.124842 systemd-tmpfiles[1563]: ACLs are not supported, ignoring.
Apr 21 10:21:22.144536 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec2644f5e0b016e95f5b80b907f8210e is 79.847ms for 994 entries.
Apr 21 10:21:22.144536 systemd-journald[1531]: System Journal (/var/log/journal/ec2644f5e0b016e95f5b80b907f8210e) is 8.0M, max 195.6M, 187.6M free.
Apr 21 10:21:22.257585 systemd-journald[1531]: Received client request to flush runtime journal.
Apr 21 10:21:22.257667 kernel: loop1: detected capacity change from 0 to 142488
Apr 21 10:21:22.171150 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 21 10:21:22.267936 kernel: loop2: detected capacity change from 0 to 61336
Apr 21 10:21:22.182111 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 21 10:21:22.197894 udevadm[1594]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Apr 21 10:21:22.261556 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 21 10:21:22.265950 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 21 10:21:22.267354 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 21 10:21:22.302290 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 21 10:21:22.312001 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 21 10:21:22.384088 systemd-tmpfiles[1603]: ACLs are not supported, ignoring.
Apr 21 10:21:22.384515 systemd-tmpfiles[1603]: ACLs are not supported, ignoring.
Apr 21 10:21:22.392139 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 21 10:21:22.397790 kernel: loop3: detected capacity change from 0 to 228704
Apr 21 10:21:22.551171 kernel: loop4: detected capacity change from 0 to 140768
Apr 21 10:21:22.601843 kernel: loop5: detected capacity change from 0 to 142488
Apr 21 10:21:22.651626 kernel: loop6: detected capacity change from 0 to 61336
Apr 21 10:21:22.679117 kernel: loop7: detected capacity change from 0 to 228704
Apr 21 10:21:22.730548 (sd-merge)[1608]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Apr 21 10:21:22.732564 (sd-merge)[1608]: Merged extensions into '/usr'.
Apr 21 10:21:22.753962 systemd[1]: Reloading requested from client PID 1562 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 21 10:21:22.753988 systemd[1]: Reloading...
Apr 21 10:21:22.941864 zram_generator::config[1637]: No configuration found.
Apr 21 10:21:22.971598 ldconfig[1558]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
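The eight loop-device messages above report only four distinct capacity values, each appearing twice (loop0-3, then loop4-7), which lines up with the four sysext images sd-merge lists being attached a second time. A small sketch that tallies such messages (the helper name is illustrative, not a real tool):

```python
import re
from collections import Counter

# Matches kernel lines like
# "loop0: detected capacity change from 0 to 140768".
LOOP_RE = re.compile(r'(loop\d+): detected capacity change from 0 to (\d+)')

def capacity_counts(lines):
    """Map each reported capacity value to how many loop devices reported it."""
    sizes = Counter()
    for line in lines:
        m = LOOP_RE.search(line)
        if m:
            sizes[int(m.group(2))] += 1
    return sizes
```

Run over the eight kernel lines above, every capacity (140768, 142488, 61336, 228704) counts exactly 2, i.e. four unique images seen twice.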
Apr 21 10:21:23.131197 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:21:23.186519 systemd[1]: Reloading finished in 431 ms.
Apr 21 10:21:23.218651 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 21 10:21:23.219514 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 21 10:21:23.220347 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 21 10:21:23.230144 systemd[1]: Starting ensure-sysext.service...
Apr 21 10:21:23.233977 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 21 10:21:23.243993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 21 10:21:23.250471 systemd[1]: Reloading requested from client PID 1687 ('systemctl') (unit ensure-sysext.service)...
Apr 21 10:21:23.250492 systemd[1]: Reloading...
Apr 21 10:21:23.286915 systemd-udevd[1689]: Using default interface naming scheme 'v255'.
Apr 21 10:21:23.300307 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 21 10:21:23.301140 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 21 10:21:23.302503 systemd-tmpfiles[1688]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 21 10:21:23.303628 systemd-tmpfiles[1688]: ACLs are not supported, ignoring.
Apr 21 10:21:23.303833 systemd-tmpfiles[1688]: ACLs are not supported, ignoring.
Apr 21 10:21:23.311477 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:21:23.311498 systemd-tmpfiles[1688]: Skipping /boot
Apr 21 10:21:23.343492 systemd-tmpfiles[1688]: Detected autofs mount point /boot during canonicalization of boot.
Apr 21 10:21:23.344813 systemd-tmpfiles[1688]: Skipping /boot
Apr 21 10:21:23.388868 zram_generator::config[1722]: No configuration found.
Apr 21 10:21:23.474032 (udev-worker)[1738]: Network interface NamePolicy= disabled on kernel command line.
Apr 21 10:21:23.610815 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 21 10:21:23.617858 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 21 10:21:23.633779 kernel: ACPI: button: Power Button [PWRF]
Apr 21 10:21:23.646908 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5
Apr 21 10:21:23.648963 kernel: ACPI: button: Sleep Button [SLPF]
Apr 21 10:21:23.654797 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Apr 21 10:21:23.660560 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:21:23.740780 kernel: mousedev: PS/2 mouse device common for all mice
Apr 21 10:21:23.747820 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1720)
Apr 21 10:21:23.828539 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 21 10:21:23.828882 systemd[1]: Reloading finished in 577 ms.
Apr 21 10:21:23.856340 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 21 10:21:23.862315 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 21 10:21:23.941530 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
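The "Duplicate line for path …, ignoring" messages above reflect systemd-tmpfiles' first-match-wins handling when two tmpfiles.d lines claim the same path. A simplified sketch of that duplicate detection (not the actual systemd code; real tmpfiles.d parsing handles more fields and quoting):

```python
def find_duplicate_paths(lines):
    """Return (line_number, path) pairs for tmpfiles.d-style lines whose
    path was already claimed by an earlier line; the earlier line wins."""
    seen = {}
    duplicates = []
    for lineno, raw in enumerate(lines, start=1):
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # comments and blank lines carry no path
        fields = line.split()
        if len(fields) < 2:
            continue  # malformed line: type but no path
        path = fields[1]
        if path in seen:
            duplicates.append((lineno, path))
        else:
            seen[path] = lineno
    return duplicates
```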
Apr 21 10:21:23.949336 systemd[1]: Finished ensure-sysext.service.
Apr 21 10:21:23.956445 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Apr 21 10:21:23.957291 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:21:23.962009 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 21 10:21:23.965117 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 21 10:21:23.965987 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 21 10:21:23.968963 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 21 10:21:23.976970 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 21 10:21:23.986415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 21 10:21:23.990213 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 21 10:21:23.998444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 21 10:21:24.001741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 21 10:21:24.009428 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 21 10:21:24.027995 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 21 10:21:24.034886 lvm[1882]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:21:24.036968 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 21 10:21:24.049047 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 21 10:21:24.050525 systemd[1]: Reached target time-set.target - System Time Set.
Apr 21 10:21:24.064058 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 21 10:21:24.074817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 21 10:21:24.077904 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 21 10:21:24.079390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 21 10:21:24.079735 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 21 10:21:24.081645 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 21 10:21:24.081905 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 21 10:21:24.083415 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 21 10:21:24.083620 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 21 10:21:24.090226 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 21 10:21:24.090435 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 21 10:21:24.091511 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 21 10:21:24.098610 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 21 10:21:24.107034 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 21 10:21:24.114999 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 21 10:21:24.116195 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 21 10:21:24.116282 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 21 10:21:24.122969 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 21 10:21:24.124150 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 21 10:21:24.142169 lvm[1912]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 21 10:21:24.140129 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 21 10:21:24.149258 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 21 10:21:24.173142 augenrules[1920]: No rules
Apr 21 10:21:24.178617 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 21 10:21:24.191216 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 21 10:21:24.198660 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 21 10:21:24.210446 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 21 10:21:24.248264 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 21 10:21:24.250849 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 21 10:21:24.293237 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 21 10:21:24.310378 systemd-networkd[1897]: lo: Link UP
Apr 21 10:21:24.310389 systemd-networkd[1897]: lo: Gained carrier
Apr 21 10:21:24.312281 systemd-networkd[1897]: Enumeration completed
Apr 21 10:21:24.312434 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 21 10:21:24.314285 systemd-networkd[1897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:21:24.314299 systemd-networkd[1897]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 21 10:21:24.317547 systemd-networkd[1897]: eth0: Link UP
Apr 21 10:21:24.317808 systemd-networkd[1897]: eth0: Gained carrier
Apr 21 10:21:24.317836 systemd-networkd[1897]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 21 10:21:24.324682 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 21 10:21:24.331864 systemd-networkd[1897]: eth0: DHCPv4 address 172.31.24.37/20, gateway 172.31.16.1 acquired from 172.31.16.1
Apr 21 10:21:24.335995 systemd-resolved[1898]: Positive Trust Anchors:
Apr 21 10:21:24.336014 systemd-resolved[1898]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 21 10:21:24.336061 systemd-resolved[1898]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 21 10:21:24.341228 systemd-resolved[1898]: Defaulting to hostname 'linux'.
Apr 21 10:21:24.343076 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 21 10:21:24.343693 systemd[1]: Reached target network.target - Network.
Apr 21 10:21:24.344185 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 21 10:21:24.344584 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 21 10:21:24.345089 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
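The DHCPv4 lease above (172.31.24.37/20 with gateway 172.31.16.1) is only usable because the gateway falls inside the network the prefix implies. Python's standard ipaddress module makes that sanity check, and the derived network, easy to verify:

```python
import ipaddress

def lease_consistent(address_with_prefix: str, gateway: str) -> bool:
    """True when the default gateway is reachable on-link, i.e. it lies
    inside the network implied by the leased address and prefix length."""
    iface = ipaddress.ip_interface(address_with_prefix)
    return ipaddress.ip_address(gateway) in iface.network
```

For the lease in the log, the implied network is 172.31.16.0/20, which contains the gateway.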
Apr 21 10:21:24.345498 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 21 10:21:24.346054 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 21 10:21:24.346541 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 21 10:21:24.346936 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 21 10:21:24.347302 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 21 10:21:24.347340 systemd[1]: Reached target paths.target - Path Units.
Apr 21 10:21:24.347718 systemd[1]: Reached target timers.target - Timer Units.
Apr 21 10:21:24.349314 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 21 10:21:24.351267 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 21 10:21:24.357034 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 21 10:21:24.358153 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 21 10:21:24.358687 systemd[1]: Reached target sockets.target - Socket Units.
Apr 21 10:21:24.359139 systemd[1]: Reached target basic.target - Basic System.
Apr 21 10:21:24.359549 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:21:24.359588 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 21 10:21:24.360918 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 21 10:21:24.364981 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 21 10:21:24.371000 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 21 10:21:24.373952 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 21 10:21:24.377972 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 21 10:21:24.378562 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 21 10:21:24.381979 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 21 10:21:24.384959 systemd[1]: Started ntpd.service - Network Time Service.
Apr 21 10:21:24.398908 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 21 10:21:24.407998 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 21 10:21:24.414240 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 21 10:21:24.422145 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 21 10:21:24.431989 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 21 10:21:24.434420 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Apr 21 10:21:24.436688 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 21 10:21:24.447985 systemd[1]: Starting update-engine.service - Update Engine...
Apr 21 10:21:24.457597 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 21 10:21:24.484580 jq[1946]: false
Apr 21 10:21:24.494281 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 21 10:21:24.495712 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 21 10:21:24.573993 jq[1959]: true
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found loop4
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found loop5
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found loop6
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found loop7
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found nvme0n1
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found nvme0n1p1
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found nvme0n1p2
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found nvme0n1p3
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found usr
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found nvme0n1p4
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found nvme0n1p6
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found nvme0n1p7
Apr 21 10:21:24.574309 extend-filesystems[1947]: Found nvme0n1p9
Apr 21 10:21:24.574309 extend-filesystems[1947]: Checking size of /dev/nvme0n1p9
Apr 21 10:21:24.642154 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Apr 21 10:21:24.642297 tar[1962]: linux-amd64/LICENSE
Apr 21 10:21:24.642297 tar[1962]: linux-amd64/helm
Apr 21 10:21:24.643093 update_engine[1957]: I20260421 10:21:24.596841 1957 main.cc:92] Flatcar Update Engine starting
Apr 21 10:21:24.643093 update_engine[1957]: I20260421 10:21:24.618747 1957 update_check_scheduler.cc:74] Next update check in 7m21s
Apr 21 10:21:24.650803 extend-filesystems[1947]: Resized partition /dev/nvme0n1p9
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:10:59 UTC 2026 (1): Starting
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: ----------------------------------------------------
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: corporation. Support and training for ntp-4 are
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: available at https://www.nwtime.org/support
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: ----------------------------------------------------
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: proto: precision = 0.096 usec (-23)
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: basedate set to 2026-04-09
Apr 21 10:21:24.659235 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:21:24.578436 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 21 10:21:24.590404 dbus-daemon[1945]: [system] SELinux support is enabled
Apr 21 10:21:24.662253 extend-filesystems[1991]: resize2fs 1.47.1 (20-May-2024)
Apr 21 10:21:24.578693 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
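update_engine's "Next update check in 7m21s" above encodes its poll delay in a compact h/m/s form. A small parser for that format (illustrative; the engine formats, rather than parses, these strings):

```python
import re

def parse_interval(text: str) -> int:
    """Convert a compact duration like '7m21s' or '1h2m3s' to seconds."""
    seconds_per_unit = {"h": 3600, "m": 60, "s": 1}
    total = 0
    for value, unit in re.findall(r"(\d+)([hms])", text):
        total += int(value) * seconds_per_unit[unit]
    return total
```

So the next update check in the log is 441 seconds away.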
Apr 21 10:21:24.604967 dbus-daemon[1945]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1897 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 21 10:21:24.672413 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:21:24.672413 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:21:24.672413 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:21:24.672413 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: Listen normally on 3 eth0 172.31.24.37:123
Apr 21 10:21:24.672413 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: Listen normally on 4 lo [::1]:123
Apr 21 10:21:24.672413 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: bind(21) AF_INET6 fe80::496:fbff:fe48:8eff%2#123 flags 0x11 failed: Cannot assign requested address
Apr 21 10:21:24.672413 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: unable to create socket on eth0 (5) for fe80::496:fbff:fe48:8eff%2#123
Apr 21 10:21:24.672413 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: failed to init interface for address fe80::496:fbff:fe48:8eff%2
Apr 21 10:21:24.672413 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: Listening on routing socket on fd #21 for interface updates
Apr 21 10:21:24.590662 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 21 10:21:24.625869 ntpd[1949]: ntpd 4.2.8p17@1.4004-o Tue Apr 21 08:10:59 UTC 2026 (1): Starting
Apr 21 10:21:24.600363 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
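ntpd's bind() failure above is the classic link-local pitfall: an fe80::/10 address is only meaningful together with an interface scope (the %2 suffix in the log), and binding it before the kernel has finished duplicate address detection fails with "Cannot assign requested address"; ntpd then retries via its routing socket. A quick check for addresses that need a scope:

```python
import ipaddress

def needs_scope_id(address: str) -> bool:
    # fe80::/10 link-local IPv6 addresses are ambiguous without an
    # interface scope (e.g. fe80::1%eth0), unlike loopback or global
    # addresses, so binding them requires the scope to be supplied.
    return ipaddress.ip_address(address).is_link_local
```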
Apr 21 10:21:24.685736 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:21:24.685736 ntpd[1949]: 21 Apr 10:21:24 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:21:24.625895 ntpd[1949]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 21 10:21:24.685925 jq[1983]: true
Apr 21 10:21:24.600403 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 21 10:21:24.625907 ntpd[1949]: ----------------------------------------------------
Apr 21 10:21:24.602905 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 21 10:21:24.625917 ntpd[1949]: ntp-4 is maintained by Network Time Foundation,
Apr 21 10:21:24.602932 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 21 10:21:24.625928 ntpd[1949]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 21 10:21:24.610080 systemd[1]: motdgen.service: Deactivated successfully.
Apr 21 10:21:24.625938 ntpd[1949]: corporation. Support and training for ntp-4 are
Apr 21 10:21:24.610201 systemd-logind[1955]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 21 10:21:24.625948 ntpd[1949]: available at https://www.nwtime.org/support
Apr 21 10:21:24.610227 systemd-logind[1955]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 21 10:21:24.625957 ntpd[1949]: ----------------------------------------------------
Apr 21 10:21:24.610250 systemd-logind[1955]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 21 10:21:24.626114 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 21 10:21:24.610318 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 21 10:21:24.639357 ntpd[1949]: proto: precision = 0.096 usec (-23)
Apr 21 10:21:24.611295 systemd-logind[1955]: New seat seat0.
Apr 21 10:21:24.645973 ntpd[1949]: basedate set to 2026-04-09
Apr 21 10:21:24.614671 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 21 10:21:24.645994 ntpd[1949]: gps base set to 2026-04-12 (week 2414)
Apr 21 10:21:24.617224 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 21 10:21:24.668312 ntpd[1949]: Listen and drop on 0 v6wildcard [::]:123
Apr 21 10:21:24.623893 systemd[1]: Started update-engine.service - Update Engine.
Apr 21 10:21:24.668369 ntpd[1949]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 21 10:21:24.624367 (ntainerd)[1975]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 21 10:21:24.668592 ntpd[1949]: Listen normally on 2 lo 127.0.0.1:123
Apr 21 10:21:24.634110 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 21 10:21:24.668632 ntpd[1949]: Listen normally on 3 eth0 172.31.24.37:123
Apr 21 10:21:24.656666 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 21 10:21:24.668675 ntpd[1949]: Listen normally on 4 lo [::1]:123
Apr 21 10:21:24.668724 ntpd[1949]: bind(21) AF_INET6 fe80::496:fbff:fe48:8eff%2#123 flags 0x11 failed: Cannot assign requested address
Apr 21 10:21:24.668750 ntpd[1949]: unable to create socket on eth0 (5) for fe80::496:fbff:fe48:8eff%2#123
Apr 21 10:21:24.668779 ntpd[1949]: failed to init interface for address fe80::496:fbff:fe48:8eff%2
Apr 21 10:21:24.668812 ntpd[1949]: Listening on routing socket on fd #21 for interface updates
Apr 21 10:21:24.675528 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:21:24.675565 ntpd[1949]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 21 10:21:24.801129 coreos-metadata[1944]: Apr 21 10:21:24.799 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:21:24.806414 coreos-metadata[1944]: Apr 21 10:21:24.806 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Apr 21 10:21:24.809427 coreos-metadata[1944]: Apr 21 10:21:24.808 INFO Fetch successful
Apr 21 10:21:24.809427 coreos-metadata[1944]: Apr 21 10:21:24.808 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Apr 21 10:21:24.825948 coreos-metadata[1944]: Apr 21 10:21:24.814 INFO Fetch successful
Apr 21 10:21:24.825948 coreos-metadata[1944]: Apr 21 10:21:24.814 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Apr 21 10:21:24.825948 coreos-metadata[1944]: Apr 21 10:21:24.816 INFO Fetch successful
Apr 21 10:21:24.825948 coreos-metadata[1944]: Apr 21 10:21:24.816 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Apr 21 10:21:24.825948 coreos-metadata[1944]: Apr 21 10:21:24.817 INFO Fetch successful
Apr 21 10:21:24.825948 coreos-metadata[1944]: Apr 21 10:21:24.817 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Apr 21 10:21:24.825948 coreos-metadata[1944]: Apr 21 10:21:24.819 INFO Fetch failed with 404: resource not found
Apr 21 10:21:24.825948 coreos-metadata[1944]: Apr 21 10:21:24.819 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Apr 21 10:21:24.825948 coreos-metadata[1944]: Apr 21 10:21:24.822 INFO Fetch successful
Apr 21 10:21:24.825948 coreos-metadata[1944]: Apr 21 10:21:24.822 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Apr 21 10:21:24.826563 coreos-metadata[1944]: Apr 21 10:21:24.826 INFO Fetch successful
Apr 21 10:21:24.826563 coreos-metadata[1944]: Apr 21 10:21:24.826 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Apr 21 10:21:24.832808 coreos-metadata[1944]: Apr 21 10:21:24.832 INFO Fetch successful
Apr 21 10:21:24.832808 coreos-metadata[1944]: Apr 21 10:21:24.832 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Apr 21 10:21:24.837493 coreos-metadata[1944]: Apr 21 10:21:24.837 INFO Fetch successful
Apr 21 10:21:24.837493 coreos-metadata[1944]: Apr 21 10:21:24.837 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Apr 21 10:21:24.839552 coreos-metadata[1944]: Apr 21 10:21:24.839 INFO Fetch successful
Apr 21 10:21:24.901312 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (1724)
Apr 21 10:21:24.923052 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Apr 21 10:21:24.957057 bash[2022]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:21:24.957248 extend-filesystems[1991]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Apr 21 10:21:24.957248 extend-filesystems[1991]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 21 10:21:24.957248 extend-filesystems[1991]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
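The coreos-metadata fetches above follow the IMDSv2 pattern: a PUT to /latest/api/token obtains a session token, which is then presented as a header on every metadata GET (and a missing resource, such as ipv6 on an IPv4-only instance, simply returns 404). A minimal sketch of building those requests with the standard library; the requests are constructed here but never sent:

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    """PUT /latest/api/token with the required TTL header (IMDSv2)."""
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def imds_metadata_request(path: str, token: str) -> urllib.request.Request:
    """GET a metadata path, presenting the session token."""
    return urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
```

The 169.254.169.254 endpoint is link-local, which is why these fetches only work from inside the instance.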
Apr 21 10:21:24.961614 extend-filesystems[1947]: Resized filesystem in /dev/nvme0n1p9
Apr 21 10:21:24.961011 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 21 10:21:24.961268 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 21 10:21:24.968837 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 21 10:21:24.970789 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 21 10:21:24.975079 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 21 10:21:24.986774 systemd[1]: Starting sshkeys.service...
Apr 21 10:21:25.063299 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 21 10:21:25.074225 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 21 10:21:25.145159 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 21 10:21:25.145343 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 21 10:21:25.149028 dbus-daemon[1945]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1995 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 21 10:21:25.159862 systemd[1]: Starting polkit.service - Authorization Manager...
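The resize recorded above grows the root filesystem from 553472 to 3587067 blocks of 4 KiB, i.e. from roughly 2.1 GiB to about 13.7 GiB, which is why resize2fs performs it online against the mounted /. The arithmetic:

```python
def blocks_to_gib(blocks: int, block_size: int = 4096) -> float:
    """Size in GiB of a filesystem, given its block count and block size."""
    return blocks * block_size / 2**30
```

For the log's numbers this gives about 2.11 GiB before and 13.68 GiB after the resize.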
Apr 21 10:21:25.234874 coreos-metadata[2051]: Apr 21 10:21:25.234 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Apr 21 10:21:25.236981 coreos-metadata[2051]: Apr 21 10:21:25.236 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Apr 21 10:21:25.237732 coreos-metadata[2051]: Apr 21 10:21:25.237 INFO Fetch successful
Apr 21 10:21:25.239341 polkitd[2065]: Started polkitd version 121
Apr 21 10:21:25.244448 coreos-metadata[2051]: Apr 21 10:21:25.241 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Apr 21 10:21:25.244448 coreos-metadata[2051]: Apr 21 10:21:25.241 INFO Fetch successful
Apr 21 10:21:25.246149 unknown[2051]: wrote ssh authorized keys file for user: core
Apr 21 10:21:25.264353 polkitd[2065]: Loading rules from directory /etc/polkit-1/rules.d
Apr 21 10:21:25.264448 polkitd[2065]: Loading rules from directory /usr/share/polkit-1/rules.d
Apr 21 10:21:25.275614 polkitd[2065]: Finished loading, compiling and executing 2 rules
Apr 21 10:21:25.283008 dbus-daemon[1945]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Apr 21 10:21:25.283224 systemd[1]: Started polkit.service - Authorization Manager.
Apr 21 10:21:25.289538 polkitd[2065]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Apr 21 10:21:25.324801 sshd_keygen[1992]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Apr 21 10:21:25.337337 update-ssh-keys[2102]: Updated "/home/core/.ssh/authorized_keys"
Apr 21 10:21:25.338478 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Apr 21 10:21:25.345173 systemd[1]: Finished sshkeys.service.
Apr 21 10:21:25.356236 locksmithd[1993]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Apr 21 10:21:25.371995 systemd-hostnamed[1995]: Hostname set to (transient)
Apr 21 10:21:25.372122 systemd-resolved[1898]: System hostname changed to 'ip-172-31-24-37'.
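polkitd above loads .rules files from both /etc/polkit-1/rules.d and /usr/share/polkit-1/rules.d, processing them in lexical order of filename, with an admin-supplied file in /etc shadowing a same-named vendor file. A simplified model of that ordering (an approximation of the documented behaviour, not polkitd's actual implementation):

```python
def rules_load_order(etc_files, usr_files):
    """Merge vendor and admin polkit rules directories.

    Files are ordered by basename across both directories; when the same
    basename exists in both, the copy under /etc wins. This mirrors the
    documented polkitd behaviour only approximately.
    """
    merged = {name: "/usr/share/polkit-1/rules.d" for name in usr_files}
    merged.update({name: "/etc/polkit-1/rules.d" for name in etc_files})
    return [f"{directory}/{name}" for name, directory in sorted(merged.items())]
```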
Apr 21 10:21:25.396043 containerd[1975]: time="2026-04-21T10:21:25.395157440Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Apr 21 10:21:25.441228 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Apr 21 10:21:25.452109 systemd[1]: Starting issuegen.service - Generate /run/issue...
Apr 21 10:21:25.490351 systemd[1]: issuegen.service: Deactivated successfully.
Apr 21 10:21:25.492099 systemd[1]: Finished issuegen.service - Generate /run/issue.
Apr 21 10:21:25.501205 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Apr 21 10:21:25.516907 containerd[1975]: time="2026-04-21T10:21:25.516817376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:25.518976 containerd[1975]: time="2026-04-21T10:21:25.518924707Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:21:25.518976 containerd[1975]: time="2026-04-21T10:21:25.518973506Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Apr 21 10:21:25.519122 containerd[1975]: time="2026-04-21T10:21:25.518997762Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Apr 21 10:21:25.519250 containerd[1975]: time="2026-04-21T10:21:25.519182010Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Apr 21 10:21:25.519294 containerd[1975]: time="2026-04-21T10:21:25.519258957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:25.519361 containerd[1975]: time="2026-04-21T10:21:25.519339960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:21:25.519411 containerd[1975]: time="2026-04-21T10:21:25.519365735Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:25.519821 containerd[1975]: time="2026-04-21T10:21:25.519612152Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:21:25.519821 containerd[1975]: time="2026-04-21T10:21:25.519638163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:25.519821 containerd[1975]: time="2026-04-21T10:21:25.519661548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:21:25.519821 containerd[1975]: time="2026-04-21T10:21:25.519679397Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:25.520000 containerd[1975]: time="2026-04-21T10:21:25.519938725Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:25.522782 containerd[1975]: time="2026-04-21T10:21:25.520196155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Apr 21 10:21:25.522782 containerd[1975]: time="2026-04-21T10:21:25.520365016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Apr 21 10:21:25.522782 containerd[1975]: time="2026-04-21T10:21:25.520390202Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Apr 21 10:21:25.522782 containerd[1975]: time="2026-04-21T10:21:25.520485491Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Apr 21 10:21:25.522782 containerd[1975]: time="2026-04-21T10:21:25.520538701Z" level=info msg="metadata content store policy set" policy=shared
Apr 21 10:21:25.531834 containerd[1975]: time="2026-04-21T10:21:25.531789369Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Apr 21 10:21:25.531934 containerd[1975]: time="2026-04-21T10:21:25.531872340Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Apr 21 10:21:25.531934 containerd[1975]: time="2026-04-21T10:21:25.531899098Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Apr 21 10:21:25.531934 containerd[1975]: time="2026-04-21T10:21:25.531920173Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Apr 21 10:21:25.532048 containerd[1975]: time="2026-04-21T10:21:25.531940921Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532135921Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532495538Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532635638Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532658923Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532682610Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532704236Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532727104Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532750333Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532839293Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532864152Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532885508Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532905522Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..."
type=io.containerd.service.v1 Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532928418Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 21 10:21:25.535528 containerd[1975]: time="2026-04-21T10:21:25.532962802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.532986621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533008783Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533039721Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533059460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533086360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533105162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533124922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533145729Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533168709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533190694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533211505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533239777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533265081Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533298600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536106 containerd[1975]: time="2026-04-21T10:21:25.533321198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536602 containerd[1975]: time="2026-04-21T10:21:25.533342017Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 21 10:21:25.536602 containerd[1975]: time="2026-04-21T10:21:25.533433279Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 21 10:21:25.536602 containerd[1975]: time="2026-04-21T10:21:25.533462535Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 21 10:21:25.536602 containerd[1975]: time="2026-04-21T10:21:25.533546681Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Apr 21 10:21:25.536602 containerd[1975]: time="2026-04-21T10:21:25.533567415Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 21 10:21:25.536602 containerd[1975]: time="2026-04-21T10:21:25.533582232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536602 containerd[1975]: time="2026-04-21T10:21:25.533599422Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 21 10:21:25.536602 containerd[1975]: time="2026-04-21T10:21:25.533613010Z" level=info msg="NRI interface is disabled by configuration." Apr 21 10:21:25.536602 containerd[1975]: time="2026-04-21T10:21:25.533626651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 21 10:21:25.536980 containerd[1975]: time="2026-04-21T10:21:25.534051269Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 21 10:21:25.536980 containerd[1975]: time="2026-04-21T10:21:25.534140941Z" level=info msg="Connect containerd service" Apr 21 10:21:25.536980 containerd[1975]: time="2026-04-21T10:21:25.534194907Z" level=info msg="using legacy CRI server" Apr 21 10:21:25.536980 containerd[1975]: time="2026-04-21T10:21:25.534205769Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 21 10:21:25.536980 containerd[1975]: 
time="2026-04-21T10:21:25.534368110Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 21 10:21:25.536980 containerd[1975]: time="2026-04-21T10:21:25.536522211Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 21 10:21:25.540689 containerd[1975]: time="2026-04-21T10:21:25.537325680Z" level=info msg="Start subscribing containerd event" Apr 21 10:21:25.540689 containerd[1975]: time="2026-04-21T10:21:25.537406848Z" level=info msg="Start recovering state" Apr 21 10:21:25.540689 containerd[1975]: time="2026-04-21T10:21:25.537488532Z" level=info msg="Start event monitor" Apr 21 10:21:25.540689 containerd[1975]: time="2026-04-21T10:21:25.537514712Z" level=info msg="Start snapshots syncer" Apr 21 10:21:25.540689 containerd[1975]: time="2026-04-21T10:21:25.537527890Z" level=info msg="Start cni network conf syncer for default" Apr 21 10:21:25.540689 containerd[1975]: time="2026-04-21T10:21:25.537540800Z" level=info msg="Start streaming server" Apr 21 10:21:25.540689 containerd[1975]: time="2026-04-21T10:21:25.539375337Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 21 10:21:25.540689 containerd[1975]: time="2026-04-21T10:21:25.539437344Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 21 10:21:25.540689 containerd[1975]: time="2026-04-21T10:21:25.540665508Z" level=info msg="containerd successfully booted in 0.148185s" Apr 21 10:21:25.540032 systemd[1]: Started containerd.service - containerd container runtime. Apr 21 10:21:25.546207 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 21 10:21:25.566975 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Apr 21 10:21:25.578951 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Apr 21 10:21:25.580322 systemd[1]: Reached target getty.target - Login Prompts.
Apr 21 10:21:25.626398 ntpd[1949]: bind(24) AF_INET6 fe80::496:fbff:fe48:8eff%2#123 flags 0x11 failed: Cannot assign requested address
Apr 21 10:21:25.626448 ntpd[1949]: unable to create socket on eth0 (6) for fe80::496:fbff:fe48:8eff%2#123
Apr 21 10:21:25.626804 ntpd[1949]: 21 Apr 10:21:25 ntpd[1949]: bind(24) AF_INET6 fe80::496:fbff:fe48:8eff%2#123 flags 0x11 failed: Cannot assign requested address
Apr 21 10:21:25.626804 ntpd[1949]: 21 Apr 10:21:25 ntpd[1949]: unable to create socket on eth0 (6) for fe80::496:fbff:fe48:8eff%2#123
Apr 21 10:21:25.626804 ntpd[1949]: 21 Apr 10:21:25 ntpd[1949]: failed to init interface for address fe80::496:fbff:fe48:8eff%2
Apr 21 10:21:25.626463 ntpd[1949]: failed to init interface for address fe80::496:fbff:fe48:8eff%2
Apr 21 10:21:25.747912 systemd-networkd[1897]: eth0: Gained IPv6LL
Apr 21 10:21:25.752382 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 21 10:21:25.753948 systemd[1]: Reached target network-online.target - Network is Online.
Apr 21 10:21:25.762914 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Apr 21 10:21:25.772114 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:21:25.780272 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 21 10:21:25.828745 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 21 10:21:25.861027 amazon-ssm-agent[2165]: Initializing new seelog logger
Apr 21 10:21:25.861780 amazon-ssm-agent[2165]: New Seelog Logger Creation Complete
Apr 21 10:21:25.861780 amazon-ssm-agent[2165]: 2026/04/21 10:21:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:21:25.861780 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:21:25.862773 amazon-ssm-agent[2165]: 2026/04/21 10:21:25 processing appconfig overrides
Apr 21 10:21:25.862773 amazon-ssm-agent[2165]: 2026/04/21 10:21:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:21:25.862773 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:21:25.862773 amazon-ssm-agent[2165]: 2026/04/21 10:21:25 processing appconfig overrides
Apr 21 10:21:25.863272 amazon-ssm-agent[2165]: 2026/04/21 10:21:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:21:25.863337 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:21:25.863470 amazon-ssm-agent[2165]: 2026/04/21 10:21:25 processing appconfig overrides
Apr 21 10:21:25.864693 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO Proxy environment variables:
Apr 21 10:21:25.867526 amazon-ssm-agent[2165]: 2026/04/21 10:21:25 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:21:25.869775 amazon-ssm-agent[2165]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Apr 21 10:21:25.869775 amazon-ssm-agent[2165]: 2026/04/21 10:21:25 processing appconfig overrides
Apr 21 10:21:25.965007 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO no_proxy:
Apr 21 10:21:25.985711 tar[1962]: linux-amd64/README.md
Apr 21 10:21:26.002462 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Apr 21 10:21:26.063253 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO https_proxy:
Apr 21 10:21:26.160817 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO http_proxy:
Apr 21 10:21:26.161016 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO Checking if agent identity type OnPrem can be assumed
Apr 21 10:21:26.161120 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO Checking if agent identity type EC2 can be assumed
Apr 21 10:21:26.161183 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO Agent will take identity from EC2
Apr 21 10:21:26.161222 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:21:26.161272 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:21:26.161318 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO [amazon-ssm-agent] using named pipe channel for IPC
Apr 21 10:21:26.161367 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Apr 21 10:21:26.161404 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Apr 21 10:21:26.161485 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO [amazon-ssm-agent] Starting Core Agent
Apr 21 10:21:26.161527 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Apr 21 10:21:26.161603 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO [Registrar] Starting registrar module
Apr 21 10:21:26.161654 amazon-ssm-agent[2165]: 2026-04-21 10:21:25 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Apr 21 10:21:26.161698 amazon-ssm-agent[2165]: 2026-04-21 10:21:26 INFO [EC2Identity] EC2 registration was successful.
Apr 21 10:21:26.161735 amazon-ssm-agent[2165]: 2026-04-21 10:21:26 INFO [CredentialRefresher] credentialRefresher has started
Apr 21 10:21:26.161826 amazon-ssm-agent[2165]: 2026-04-21 10:21:26 INFO [CredentialRefresher] Starting credentials refresher loop
Apr 21 10:21:26.161826 amazon-ssm-agent[2165]: 2026-04-21 10:21:26 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Apr 21 10:21:26.161917 amazon-ssm-agent[2165]: 2026-04-21 10:21:26 INFO [CredentialRefresher] Next credential rotation will be in 31.78331056865 minutes
Apr 21 10:21:27.175802 amazon-ssm-agent[2165]: 2026-04-21 10:21:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Apr 21 10:21:27.277517 amazon-ssm-agent[2165]: 2026-04-21 10:21:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2187) started
Apr 21 10:21:27.377818 amazon-ssm-agent[2165]: 2026-04-21 10:21:27 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Apr 21 10:21:27.762386 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:21:27.764537 systemd[1]: Reached target multi-user.target - Multi-User System.
Apr 21 10:21:27.766873 systemd[1]: Startup finished in 614ms (kernel) + 5.973s (initrd) + 7.136s (userspace) = 13.725s.
Apr 21 10:21:27.773798 (kubelet)[2202]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:21:28.626361 ntpd[1949]: Listen normally on 7 eth0 [fe80::496:fbff:fe48:8eff%2]:123
Apr 21 10:21:28.626776 ntpd[1949]: 21 Apr 10:21:28 ntpd[1949]: Listen normally on 7 eth0 [fe80::496:fbff:fe48:8eff%2]:123
Apr 21 10:21:29.108824 kubelet[2202]: E0421 10:21:29.108751    2202 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:21:29.112406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:21:29.112622 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:21:29.112999 systemd[1]: kubelet.service: Consumed 1.103s CPU time.
Apr 21 10:21:29.822906 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 21 10:21:29.828139 systemd[1]: Started sshd@0-172.31.24.37:22-50.85.169.122:38028.service - OpenSSH per-connection server daemon (50.85.169.122:38028).
Apr 21 10:21:30.839052 sshd[2214]: Accepted publickey for core from 50.85.169.122 port 38028 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:21:30.841891 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:30.853118 systemd-logind[1955]: New session 1 of user core.
Apr 21 10:21:30.854727 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Apr 21 10:21:30.867200 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Apr 21 10:21:30.881456 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Apr 21 10:21:30.889203 systemd[1]: Starting user@500.service - User Manager for UID 500...
Apr 21 10:21:30.894806 (systemd)[2218]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Apr 21 10:21:31.020837 systemd[2218]: Queued start job for default target default.target.
Apr 21 10:21:31.028234 systemd[2218]: Created slice app.slice - User Application Slice.
Apr 21 10:21:31.028282 systemd[2218]: Reached target paths.target - Paths.
Apr 21 10:21:31.028305 systemd[2218]: Reached target timers.target - Timers.
Apr 21 10:21:31.030037 systemd[2218]: Starting dbus.socket - D-Bus User Message Bus Socket...
Apr 21 10:21:31.048997 systemd[2218]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Apr 21 10:21:31.049153 systemd[2218]: Reached target sockets.target - Sockets.
Apr 21 10:21:31.049175 systemd[2218]: Reached target basic.target - Basic System.
Apr 21 10:21:31.049231 systemd[2218]: Reached target default.target - Main User Target.
Apr 21 10:21:31.049272 systemd[2218]: Startup finished in 146ms.
Apr 21 10:21:31.049617 systemd[1]: Started user@500.service - User Manager for UID 500.
Apr 21 10:21:31.062049 systemd[1]: Started session-1.scope - Session 1 of User core.
Apr 21 10:21:32.874422 systemd-resolved[1898]: Clock change detected. Flushing caches.
Apr 21 10:21:33.016910 systemd[1]: Started sshd@1-172.31.24.37:22-50.85.169.122:38036.service - OpenSSH per-connection server daemon (50.85.169.122:38036).
Apr 21 10:21:34.000941 sshd[2229]: Accepted publickey for core from 50.85.169.122 port 38036 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:21:34.002620 sshd[2229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:34.007122 systemd-logind[1955]: New session 2 of user core.
Apr 21 10:21:34.019855 systemd[1]: Started session-2.scope - Session 2 of User core.
Apr 21 10:21:34.688729 sshd[2229]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:34.692256 systemd[1]: sshd@1-172.31.24.37:22-50.85.169.122:38036.service: Deactivated successfully.
Apr 21 10:21:34.694022 systemd[1]: session-2.scope: Deactivated successfully.
Apr 21 10:21:34.695877 systemd-logind[1955]: Session 2 logged out. Waiting for processes to exit.
Apr 21 10:21:34.697211 systemd-logind[1955]: Removed session 2.
Apr 21 10:21:34.875892 systemd[1]: Started sshd@2-172.31.24.37:22-50.85.169.122:38042.service - OpenSSH per-connection server daemon (50.85.169.122:38042).
Apr 21 10:21:35.892173 sshd[2236]: Accepted publickey for core from 50.85.169.122 port 38042 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:21:35.893779 sshd[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:35.899055 systemd-logind[1955]: New session 3 of user core.
Apr 21 10:21:35.905787 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 21 10:21:36.596978 sshd[2236]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:36.601318 systemd[1]: sshd@2-172.31.24.37:22-50.85.169.122:38042.service: Deactivated successfully.
Apr 21 10:21:36.603107 systemd[1]: session-3.scope: Deactivated successfully.
Apr 21 10:21:36.603873 systemd-logind[1955]: Session 3 logged out. Waiting for processes to exit.
Apr 21 10:21:36.605020 systemd-logind[1955]: Removed session 3.
Apr 21 10:21:36.766978 systemd[1]: Started sshd@3-172.31.24.37:22-50.85.169.122:38044.service - OpenSSH per-connection server daemon (50.85.169.122:38044).
Apr 21 10:21:37.749689 sshd[2243]: Accepted publickey for core from 50.85.169.122 port 38044 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:21:37.750374 sshd[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:37.755774 systemd-logind[1955]: New session 4 of user core.
Apr 21 10:21:37.761776 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 21 10:21:38.435576 sshd[2243]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:38.438791 systemd[1]: sshd@3-172.31.24.37:22-50.85.169.122:38044.service: Deactivated successfully.
Apr 21 10:21:38.441152 systemd[1]: session-4.scope: Deactivated successfully.
Apr 21 10:21:38.442605 systemd-logind[1955]: Session 4 logged out. Waiting for processes to exit.
Apr 21 10:21:38.443971 systemd-logind[1955]: Removed session 4.
Apr 21 10:21:38.613046 systemd[1]: Started sshd@4-172.31.24.37:22-50.85.169.122:38050.service - OpenSSH per-connection server daemon (50.85.169.122:38050).
Apr 21 10:21:39.628035 sshd[2250]: Accepted publickey for core from 50.85.169.122 port 38050 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:21:39.629517 sshd[2250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:21:39.634832 systemd-logind[1955]: New session 5 of user core.
Apr 21 10:21:39.649782 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 21 10:21:40.179805 sudo[2253]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 21 10:21:40.180264 sudo[2253]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 21 10:21:40.197435 sudo[2253]: pam_unix(sudo:session): session closed for user root
Apr 21 10:21:40.361990 sshd[2250]: pam_unix(sshd:session): session closed for user core
Apr 21 10:21:40.365934 systemd[1]: sshd@4-172.31.24.37:22-50.85.169.122:38050.service: Deactivated successfully.
Apr 21 10:21:40.368228 systemd[1]: session-5.scope: Deactivated successfully.
Apr 21 10:21:40.369262 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 21 10:21:40.370824 systemd-logind[1955]: Session 5 logged out. Waiting for processes to exit.
Apr 21 10:21:40.376784 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:21:40.378041 systemd-logind[1955]: Removed session 5. Apr 21 10:21:40.544653 systemd[1]: Started sshd@5-172.31.24.37:22-50.85.169.122:39444.service - OpenSSH per-connection server daemon (50.85.169.122:39444). Apr 21 10:21:40.597179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:21:40.612069 (kubelet)[2268]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 21 10:21:40.657723 kubelet[2268]: E0421 10:21:40.657669 2268 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 21 10:21:40.661797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 21 10:21:40.662003 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 21 10:21:41.548786 sshd[2261]: Accepted publickey for core from 50.85.169.122 port 39444 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:21:41.549501 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:21:41.554057 systemd-logind[1955]: New session 6 of user core. Apr 21 10:21:41.564784 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 21 10:21:42.083798 sudo[2277]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 21 10:21:42.084284 sudo[2277]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:21:42.088381 sudo[2277]: pam_unix(sudo:session): session closed for user root Apr 21 10:21:42.094057 sudo[2276]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 21 10:21:42.094447 sudo[2276]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:21:42.107428 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 21 10:21:42.111811 auditctl[2280]: No rules Apr 21 10:21:42.112328 systemd[1]: audit-rules.service: Deactivated successfully. Apr 21 10:21:42.112601 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 21 10:21:42.115440 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 21 10:21:42.153963 augenrules[2298]: No rules Apr 21 10:21:42.155442 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 21 10:21:42.156722 sudo[2276]: pam_unix(sudo:session): session closed for user root Apr 21 10:21:42.321203 sshd[2261]: pam_unix(sshd:session): session closed for user core Apr 21 10:21:42.325843 systemd-logind[1955]: Session 6 logged out. Waiting for processes to exit. Apr 21 10:21:42.326778 systemd[1]: sshd@5-172.31.24.37:22-50.85.169.122:39444.service: Deactivated successfully. Apr 21 10:21:42.329068 systemd[1]: session-6.scope: Deactivated successfully. Apr 21 10:21:42.330038 systemd-logind[1955]: Removed session 6. Apr 21 10:21:42.493934 systemd[1]: Started sshd@6-172.31.24.37:22-50.85.169.122:39452.service - OpenSSH per-connection server daemon (50.85.169.122:39452). 
Apr 21 10:21:43.484559 sshd[2306]: Accepted publickey for core from 50.85.169.122 port 39452 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:21:43.485256 sshd[2306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:21:43.490432 systemd-logind[1955]: New session 7 of user core. Apr 21 10:21:43.496794 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 21 10:21:44.012882 sudo[2309]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 21 10:21:44.013282 sudo[2309]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 21 10:21:44.407885 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 21 10:21:44.410142 (dockerd)[2326]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 21 10:21:44.782579 dockerd[2326]: time="2026-04-21T10:21:44.782427929Z" level=info msg="Starting up" Apr 21 10:21:44.893645 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2558679414-merged.mount: Deactivated successfully. Apr 21 10:21:44.933842 dockerd[2326]: time="2026-04-21T10:21:44.933432847Z" level=info msg="Loading containers: start." Apr 21 10:21:45.063551 kernel: Initializing XFRM netlink socket Apr 21 10:21:45.093493 (udev-worker)[2347]: Network interface NamePolicy= disabled on kernel command line. Apr 21 10:21:45.159780 systemd-networkd[1897]: docker0: Link UP Apr 21 10:21:45.191614 dockerd[2326]: time="2026-04-21T10:21:45.191555298Z" level=info msg="Loading containers: done." 
Apr 21 10:21:45.213815 dockerd[2326]: time="2026-04-21T10:21:45.213754224Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 21 10:21:45.214123 dockerd[2326]: time="2026-04-21T10:21:45.213886610Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 21 10:21:45.214123 dockerd[2326]: time="2026-04-21T10:21:45.214033289Z" level=info msg="Daemon has completed initialization"
Apr 21 10:21:45.262376 dockerd[2326]: time="2026-04-21T10:21:45.261974989Z" level=info msg="API listen on /run/docker.sock"
Apr 21 10:21:45.262086 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 21 10:21:46.177740 containerd[1975]: time="2026-04-21T10:21:46.177694348Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\""
Apr 21 10:21:46.909199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount334288805.mount: Deactivated successfully.
Apr 21 10:21:48.525010 containerd[1975]: time="2026-04-21T10:21:48.524952923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:48.526374 containerd[1975]: time="2026-04-21T10:21:48.526327273Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.11: active requests=0, bytes read=30193989"
Apr 21 10:21:48.527657 containerd[1975]: time="2026-04-21T10:21:48.527617681Z" level=info msg="ImageCreate event name:\"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:48.530977 containerd[1975]: time="2026-04-21T10:21:48.530612511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:48.531959 containerd[1975]: time="2026-04-21T10:21:48.531915356Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.11\" with image id \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:18e9f2b6e4d67c24941e14b2d41ec0aa6e5f628e39f2ef2163e176de85bbe39e\", size \"30190588\" in 2.354176642s"
Apr 21 10:21:48.532061 containerd[1975]: time="2026-04-21T10:21:48.531966502Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.11\" returns image reference \"sha256:7ea99c30f23b106a042b6c46e565fddb42b20bbe58ba6852e562eed03477aec2\""
Apr 21 10:21:48.532630 containerd[1975]: time="2026-04-21T10:21:48.532600983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\""
Apr 21 10:21:50.415192 containerd[1975]: time="2026-04-21T10:21:50.415034810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:50.417369 containerd[1975]: time="2026-04-21T10:21:50.417203506Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.11: active requests=0, bytes read=26171447"
Apr 21 10:21:50.419989 containerd[1975]: time="2026-04-21T10:21:50.419634149Z" level=info msg="ImageCreate event name:\"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:50.423994 containerd[1975]: time="2026-04-21T10:21:50.423925090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:50.425220 containerd[1975]: time="2026-04-21T10:21:50.425074368Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.11\" with image id \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7579451c5b3c2715da4a263c5d80a3367a24fdc12e86fde6851674d567d1dfb2\", size \"27737794\" in 1.892439078s"
Apr 21 10:21:50.425220 containerd[1975]: time="2026-04-21T10:21:50.425120160Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.11\" returns image reference \"sha256:c75dc8a6c47e2f7491fa2e367879f53c6f46053066e6b7135df4b154ddd94a1f\""
Apr 21 10:21:50.425805 containerd[1975]: time="2026-04-21T10:21:50.425774586Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\""
Apr 21 10:21:50.808466 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 21 10:21:50.813797 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:21:51.032750 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:21:51.045067 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:21:51.098965 kubelet[2534]: E0421 10:21:51.098573 2534 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:21:51.102785 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:21:51.102982 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:21:51.899371 containerd[1975]: time="2026-04-21T10:21:51.899313282Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:51.901793 containerd[1975]: time="2026-04-21T10:21:51.901551610Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.11: active requests=0, bytes read=20289756"
Apr 21 10:21:51.904215 containerd[1975]: time="2026-04-21T10:21:51.904133777Z" level=info msg="ImageCreate event name:\"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:51.908894 containerd[1975]: time="2026-04-21T10:21:51.908823223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:51.910357 containerd[1975]: time="2026-04-21T10:21:51.910066481Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.11\" with image id \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5506f0f94c4d9aeb071664893aabc12166bcb7f775008a6fff02d004e6091d28\", size \"21856121\" in 1.484256268s"
Apr 21 10:21:51.910357 containerd[1975]: time="2026-04-21T10:21:51.910118487Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.11\" returns image reference \"sha256:3febad3451e2d599688a8ad13d19d03c48c9054be209342c748fac2bb6c56f97\""
Apr 21 10:21:51.911495 containerd[1975]: time="2026-04-21T10:21:51.911466449Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\""
Apr 21 10:21:53.249005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494778462.mount: Deactivated successfully.
Apr 21 10:21:53.854465 containerd[1975]: time="2026-04-21T10:21:53.854405280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:53.856488 containerd[1975]: time="2026-04-21T10:21:53.856430440Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.11: active requests=0, bytes read=32010711"
Apr 21 10:21:53.858878 containerd[1975]: time="2026-04-21T10:21:53.858806924Z" level=info msg="ImageCreate event name:\"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:53.862870 containerd[1975]: time="2026-04-21T10:21:53.862807480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:53.863948 containerd[1975]: time="2026-04-21T10:21:53.863738735Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.11\" with image id \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\", repo tag \"registry.k8s.io/kube-proxy:v1.33.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d18637b5c5f58a4ca0163d3cf184e53d4c522963c242860562be7cb25e9303e\", size \"32009730\" in 1.95223258s"
Apr 21 10:21:53.863948 containerd[1975]: time="2026-04-21T10:21:53.863788640Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.11\" returns image reference \"sha256:4ce1332df15d2a0b1c2d3b18292afb4ff670070401211daebb00b7293b26f6d0\""
Apr 21 10:21:53.864833 containerd[1975]: time="2026-04-21T10:21:53.864660300Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Apr 21 10:21:54.462532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2033590690.mount: Deactivated successfully.
Apr 21 10:21:55.917105 containerd[1975]: time="2026-04-21T10:21:55.917041171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:55.919167 containerd[1975]: time="2026-04-21T10:21:55.919093037Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Apr 21 10:21:55.921694 containerd[1975]: time="2026-04-21T10:21:55.921627521Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:55.927190 containerd[1975]: time="2026-04-21T10:21:55.927118109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:55.928605 containerd[1975]: time="2026-04-21T10:21:55.928402368Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.063699282s"
Apr 21 10:21:55.928605 containerd[1975]: time="2026-04-21T10:21:55.928466020Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Apr 21 10:21:55.929674 containerd[1975]: time="2026-04-21T10:21:55.929637762Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Apr 21 10:21:56.463243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount428230420.mount: Deactivated successfully.
Apr 21 10:21:56.475860 containerd[1975]: time="2026-04-21T10:21:56.475796913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:56.477853 containerd[1975]: time="2026-04-21T10:21:56.477771301Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Apr 21 10:21:56.480160 containerd[1975]: time="2026-04-21T10:21:56.480093988Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:56.483939 containerd[1975]: time="2026-04-21T10:21:56.483894689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:56.485360 containerd[1975]: time="2026-04-21T10:21:56.484650987Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 554.976796ms"
Apr 21 10:21:56.485360 containerd[1975]: time="2026-04-21T10:21:56.484696238Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Apr 21 10:21:56.485360 containerd[1975]: time="2026-04-21T10:21:56.485210740Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Apr 21 10:21:56.647239 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 21 10:21:57.086211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2204055074.mount: Deactivated successfully.
Apr 21 10:21:58.426927 containerd[1975]: time="2026-04-21T10:21:58.426860285Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:58.428810 containerd[1975]: time="2026-04-21T10:21:58.428718422Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719426"
Apr 21 10:21:58.431185 containerd[1975]: time="2026-04-21T10:21:58.431112886Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:58.435492 containerd[1975]: time="2026-04-21T10:21:58.435426820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:21:58.437071 containerd[1975]: time="2026-04-21T10:21:58.436628709Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.951386762s"
Apr 21 10:21:58.437071 containerd[1975]: time="2026-04-21T10:21:58.436675492Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Apr 21 10:22:01.312091 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 21 10:22:01.320921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:22:01.620623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:22:01.631972 (kubelet)[2704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 21 10:22:01.697368 kubelet[2704]: E0421 10:22:01.697316 2704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 21 10:22:01.700232 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 21 10:22:01.700447 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 21 10:22:03.728448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:22:03.735907 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:22:03.773487 systemd[1]: Reloading requested from client PID 2718 ('systemctl') (unit session-7.scope)...
Apr 21 10:22:03.773506 systemd[1]: Reloading...
Apr 21 10:22:03.895554 zram_generator::config[2758]: No configuration found.
Apr 21 10:22:04.062716 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 21 10:22:04.149162 systemd[1]: Reloading finished in 374 ms.
Apr 21 10:22:04.212172 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 21 10:22:04.212295 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 21 10:22:04.212690 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:22:04.219115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 21 10:22:04.494590 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 21 10:22:04.505614 (kubelet)[2821]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 21 10:22:04.565924 kubelet[2821]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:22:04.565924 kubelet[2821]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 21 10:22:04.565924 kubelet[2821]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 21 10:22:04.566416 kubelet[2821]: I0421 10:22:04.565983 2821 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 21 10:22:04.890729 kubelet[2821]: I0421 10:22:04.890592 2821 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Apr 21 10:22:04.890729 kubelet[2821]: I0421 10:22:04.890718 2821 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 21 10:22:04.892652 kubelet[2821]: I0421 10:22:04.892596 2821 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 21 10:22:04.991367 kubelet[2821]: I0421 10:22:04.991324 2821 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 21 10:22:04.998292 kubelet[2821]: E0421 10:22:04.998190 2821 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.37:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 21 10:22:05.001922 kubelet[2821]: E0421 10:22:05.001876 2821 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 21 10:22:05.001922 kubelet[2821]: I0421 10:22:05.001916 2821 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Apr 21 10:22:05.016477 kubelet[2821]: I0421 10:22:05.016445 2821 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Apr 21 10:22:05.024981 kubelet[2821]: I0421 10:22:05.024873 2821 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 21 10:22:05.036045 kubelet[2821]: I0421 10:22:05.024974 2821 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-37","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 21 10:22:05.036045 kubelet[2821]: I0421 10:22:05.036045 2821 topology_manager.go:138] "Creating topology manager with none policy"
Apr 21 10:22:05.036311 kubelet[2821]: I0421 10:22:05.036069 2821 container_manager_linux.go:303] "Creating device plugin manager"
Apr 21 10:22:05.036311 kubelet[2821]: I0421 10:22:05.036263 2821 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:22:05.045371 kubelet[2821]: I0421 10:22:05.045321 2821 kubelet.go:480] "Attempting to sync node with API server"
Apr 21 10:22:05.045704 kubelet[2821]: I0421 10:22:05.045681 2821 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 21 10:22:05.045789 kubelet[2821]: I0421 10:22:05.045735 2821 kubelet.go:386] "Adding apiserver pod source"
Apr 21 10:22:05.045789 kubelet[2821]: I0421 10:22:05.045762 2821 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 21 10:22:05.053552 kubelet[2821]: E0421 10:22:05.051419 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-37&limit=500&resourceVersion=0\": dial tcp 172.31.24.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 21 10:22:05.053552 kubelet[2821]: E0421 10:22:05.053267 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 21 10:22:05.053552 kubelet[2821]: I0421 10:22:05.053492 2821 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 21 10:22:05.054298 kubelet[2821]: I0421 10:22:05.054262 2821 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 21 10:22:05.059475 kubelet[2821]: W0421 10:22:05.059437 2821 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 21 10:22:05.067865 kubelet[2821]: I0421 10:22:05.067825 2821 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Apr 21 10:22:05.068030 kubelet[2821]: I0421 10:22:05.067896 2821 server.go:1289] "Started kubelet"
Apr 21 10:22:05.080624 kubelet[2821]: E0421 10:22:05.078484 2821 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.37:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.37:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-37.18a8581ac9552453 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-37,UID:ip-172-31-24-37,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-37,},FirstTimestamp:2026-04-21 10:22:05.067854931 +0000 UTC m=+0.550361245,LastTimestamp:2026-04-21 10:22:05.067854931 +0000 UTC m=+0.550361245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-37,}"
Apr 21 10:22:05.082557 kubelet[2821]: I0421 10:22:05.081765 2821 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 21 10:22:05.083429 kubelet[2821]: I0421 10:22:05.083374 2821 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 21 10:22:05.083994 kubelet[2821]: I0421 10:22:05.083974 2821 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 21 10:22:05.085565 kubelet[2821]: I0421 10:22:05.085508 2821 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 21 10:22:05.089984 kubelet[2821]: I0421 10:22:05.089951 2821 server.go:317] "Adding debug handlers to kubelet server"
Apr 21 10:22:05.091504 kubelet[2821]: I0421 10:22:05.091297 2821 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 21 10:22:05.100497 kubelet[2821]: I0421 10:22:05.098631 2821 volume_manager.go:297] "Starting Kubelet Volume Manager"
Apr 21 10:22:05.102408 kubelet[2821]: E0421 10:22:05.101974 2821 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-37\" not found"
Apr 21 10:22:05.102557 kubelet[2821]: E0421 10:22:05.102389 2821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-37?timeout=10s\": dial tcp 172.31.24.37:6443: connect: connection refused" interval="200ms"
Apr 21 10:22:05.105011 kubelet[2821]: I0421 10:22:05.104970 2821 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Apr 21 10:22:05.105127 kubelet[2821]: I0421 10:22:05.105038 2821 reconciler.go:26] "Reconciler: start to sync state"
Apr 21 10:22:05.105985 kubelet[2821]: E0421 10:22:05.105939 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 21 10:22:05.108334 kubelet[2821]: E0421 10:22:05.108304 2821 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 21 10:22:05.109039 kubelet[2821]: I0421 10:22:05.109011 2821 factory.go:223] Registration of the containerd container factory successfully
Apr 21 10:22:05.109039 kubelet[2821]: I0421 10:22:05.109026 2821 factory.go:223] Registration of the systemd container factory successfully
Apr 21 10:22:05.109173 kubelet[2821]: I0421 10:22:05.109102 2821 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 21 10:22:05.123961 kubelet[2821]: I0421 10:22:05.123769 2821 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Apr 21 10:22:05.125598 kubelet[2821]: I0421 10:22:05.125240 2821 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Apr 21 10:22:05.127481 kubelet[2821]: I0421 10:22:05.127078 2821 status_manager.go:230] "Starting to sync pod status with apiserver"
Apr 21 10:22:05.127481 kubelet[2821]: I0421 10:22:05.127133 2821 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 21 10:22:05.127481 kubelet[2821]: I0421 10:22:05.127144 2821 kubelet.go:2436] "Starting kubelet main sync loop"
Apr 21 10:22:05.127481 kubelet[2821]: E0421 10:22:05.127201 2821 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 21 10:22:05.134625 kubelet[2821]: E0421 10:22:05.134585 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 21 10:22:05.142033 kubelet[2821]: I0421 10:22:05.141931 2821 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 21 10:22:05.142033 kubelet[2821]: I0421 10:22:05.141954 2821 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 21 10:22:05.142033 kubelet[2821]: I0421 10:22:05.141974 2821 state_mem.go:36] "Initialized new in-memory state store"
Apr 21 10:22:05.148683 kubelet[2821]: I0421 10:22:05.148638 2821 policy_none.go:49] "None policy: Start"
Apr 21 10:22:05.148683 kubelet[2821]: I0421 10:22:05.148682 2821 memory_manager.go:186] "Starting memorymanager" policy="None"
Apr 21 10:22:05.148683 kubelet[2821]: I0421 10:22:05.148698 2821 state_mem.go:35] "Initializing new in-memory state store"
Apr 21 10:22:05.158112 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 21 10:22:05.178189 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Apr 21 10:22:05.183290 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Apr 21 10:22:05.194693 kubelet[2821]: E0421 10:22:05.194662 2821 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Apr 21 10:22:05.195284 kubelet[2821]: I0421 10:22:05.195256 2821 eviction_manager.go:189] "Eviction manager: starting control loop"
Apr 21 10:22:05.195637 kubelet[2821]: I0421 10:22:05.195278 2821 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Apr 21 10:22:05.200437 kubelet[2821]: I0421 10:22:05.199752 2821 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Apr 21 10:22:05.201697 kubelet[2821]: E0421 10:22:05.201659 2821 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Apr 21 10:22:05.201890 kubelet[2821]: E0421 10:22:05.201715 2821 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-37\" not found"
Apr 21 10:22:05.245347 systemd[1]: Created slice kubepods-burstable-pod87c00bb34c0c5fa32ec989f942123f49.slice - libcontainer container kubepods-burstable-pod87c00bb34c0c5fa32ec989f942123f49.slice.
Apr 21 10:22:05.258616 kubelet[2821]: E0421 10:22:05.258570 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-37\" not found" node="ip-172-31-24-37"
Apr 21 10:22:05.265663 systemd[1]: Created slice kubepods-burstable-pod675e0c1a180ac36d0944d7b4a8e46ebb.slice - libcontainer container kubepods-burstable-pod675e0c1a180ac36d0944d7b4a8e46ebb.slice.
Apr 21 10:22:05.268463 kubelet[2821]: E0421 10:22:05.268428 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-37\" not found" node="ip-172-31-24-37" Apr 21 10:22:05.272191 systemd[1]: Created slice kubepods-burstable-podef0d73cb4c6c8885554d887c8eada463.slice - libcontainer container kubepods-burstable-podef0d73cb4c6c8885554d887c8eada463.slice. Apr 21 10:22:05.274154 kubelet[2821]: E0421 10:22:05.274122 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-37\" not found" node="ip-172-31-24-37" Apr 21 10:22:05.297774 kubelet[2821]: I0421 10:22:05.297737 2821 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-37" Apr 21 10:22:05.298168 kubelet[2821]: E0421 10:22:05.298131 2821 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.37:6443/api/v1/nodes\": dial tcp 172.31.24.37:6443: connect: connection refused" node="ip-172-31-24-37" Apr 21 10:22:05.303876 kubelet[2821]: E0421 10:22:05.303817 2821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-37?timeout=10s\": dial tcp 172.31.24.37:6443: connect: connection refused" interval="400ms" Apr 21 10:22:05.306486 kubelet[2821]: I0421 10:22:05.306145 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/675e0c1a180ac36d0944d7b4a8e46ebb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-37\" (UID: \"675e0c1a180ac36d0944d7b4a8e46ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:05.306486 kubelet[2821]: I0421 10:22:05.306192 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef0d73cb4c6c8885554d887c8eada463-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-37\" (UID: \"ef0d73cb4c6c8885554d887c8eada463\") " pod="kube-system/kube-scheduler-ip-172-31-24-37" Apr 21 10:22:05.306486 kubelet[2821]: I0421 10:22:05.306224 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87c00bb34c0c5fa32ec989f942123f49-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-37\" (UID: \"87c00bb34c0c5fa32ec989f942123f49\") " pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:05.306486 kubelet[2821]: I0421 10:22:05.306250 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87c00bb34c0c5fa32ec989f942123f49-ca-certs\") pod \"kube-apiserver-ip-172-31-24-37\" (UID: \"87c00bb34c0c5fa32ec989f942123f49\") " pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:05.306486 kubelet[2821]: I0421 10:22:05.306273 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87c00bb34c0c5fa32ec989f942123f49-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-37\" (UID: \"87c00bb34c0c5fa32ec989f942123f49\") " pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:05.306741 kubelet[2821]: I0421 10:22:05.306287 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/675e0c1a180ac36d0944d7b4a8e46ebb-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-37\" (UID: \"675e0c1a180ac36d0944d7b4a8e46ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:05.306741 kubelet[2821]: I0421 10:22:05.306301 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/675e0c1a180ac36d0944d7b4a8e46ebb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-37\" (UID: \"675e0c1a180ac36d0944d7b4a8e46ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:05.306741 kubelet[2821]: I0421 10:22:05.306316 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/675e0c1a180ac36d0944d7b4a8e46ebb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-37\" (UID: \"675e0c1a180ac36d0944d7b4a8e46ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:05.306741 kubelet[2821]: I0421 10:22:05.306442 2821 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/675e0c1a180ac36d0944d7b4a8e46ebb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-37\" (UID: \"675e0c1a180ac36d0944d7b4a8e46ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:05.501145 kubelet[2821]: I0421 10:22:05.501002 2821 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-37" Apr 21 10:22:05.501456 kubelet[2821]: E0421 10:22:05.501418 2821 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.37:6443/api/v1/nodes\": dial tcp 172.31.24.37:6443: connect: connection refused" node="ip-172-31-24-37" Apr 21 10:22:05.564341 containerd[1975]: time="2026-04-21T10:22:05.564291942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-37,Uid:87c00bb34c0c5fa32ec989f942123f49,Namespace:kube-system,Attempt:0,}" Apr 21 10:22:05.575393 containerd[1975]: time="2026-04-21T10:22:05.574776659Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-37,Uid:675e0c1a180ac36d0944d7b4a8e46ebb,Namespace:kube-system,Attempt:0,}" Apr 21 10:22:05.575632 containerd[1975]: time="2026-04-21T10:22:05.575598888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-37,Uid:ef0d73cb4c6c8885554d887c8eada463,Namespace:kube-system,Attempt:0,}" Apr 21 10:22:05.704617 kubelet[2821]: E0421 10:22:05.704576 2821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-37?timeout=10s\": dial tcp 172.31.24.37:6443: connect: connection refused" interval="800ms" Apr 21 10:22:05.903865 kubelet[2821]: I0421 10:22:05.903621 2821 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-37" Apr 21 10:22:05.904049 kubelet[2821]: E0421 10:22:05.903923 2821 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.37:6443/api/v1/nodes\": dial tcp 172.31.24.37:6443: connect: connection refused" node="ip-172-31-24-37" Apr 21 10:22:05.915966 kubelet[2821]: E0421 10:22:05.915914 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.24.37:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.24.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 21 10:22:06.328892 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3040578915.mount: Deactivated successfully. 
Apr 21 10:22:06.337451 kubelet[2821]: E0421 10:22:06.337387 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.24.37:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-37&limit=500&resourceVersion=0\": dial tcp 172.31.24.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 21 10:22:06.347959 containerd[1975]: time="2026-04-21T10:22:06.347900459Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:22:06.349817 containerd[1975]: time="2026-04-21T10:22:06.349750644Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Apr 21 10:22:06.352058 containerd[1975]: time="2026-04-21T10:22:06.352005619Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:22:06.354122 containerd[1975]: time="2026-04-21T10:22:06.354077962Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:22:06.356261 containerd[1975]: time="2026-04-21T10:22:06.356210678Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:22:06.358533 containerd[1975]: time="2026-04-21T10:22:06.358475408Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:22:06.360608 containerd[1975]: time="2026-04-21T10:22:06.360477301Z" level=info msg="stop pulling image 
registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 21 10:22:06.364056 containerd[1975]: time="2026-04-21T10:22:06.364016763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 21 10:22:06.365250 containerd[1975]: time="2026-04-21T10:22:06.364983510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 790.116066ms" Apr 21 10:22:06.368190 containerd[1975]: time="2026-04-21T10:22:06.368141693Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 792.477724ms" Apr 21 10:22:06.369625 containerd[1975]: time="2026-04-21T10:22:06.369590630Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 805.218337ms" Apr 21 10:22:06.459566 kubelet[2821]: E0421 10:22:06.459091 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.24.37:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.24.37:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 21 10:22:06.517952 kubelet[2821]: E0421 10:22:06.506173 2821 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.37:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-37?timeout=10s\": dial tcp 172.31.24.37:6443: connect: connection refused" interval="1.6s" Apr 21 10:22:06.567982 containerd[1975]: time="2026-04-21T10:22:06.567849419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:06.567982 containerd[1975]: time="2026-04-21T10:22:06.567925127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:06.567982 containerd[1975]: time="2026-04-21T10:22:06.567947844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:06.568795 containerd[1975]: time="2026-04-21T10:22:06.568059425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:06.575658 containerd[1975]: time="2026-04-21T10:22:06.574959322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:06.575658 containerd[1975]: time="2026-04-21T10:22:06.575043202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:06.575658 containerd[1975]: time="2026-04-21T10:22:06.575077082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:06.575658 containerd[1975]: time="2026-04-21T10:22:06.575175362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:06.583242 containerd[1975]: time="2026-04-21T10:22:06.583016380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:06.583242 containerd[1975]: time="2026-04-21T10:22:06.583096469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:06.584826 containerd[1975]: time="2026-04-21T10:22:06.583133488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:06.584826 containerd[1975]: time="2026-04-21T10:22:06.583257616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:06.603873 kubelet[2821]: E0421 10:22:06.603829 2821 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.24.37:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.24.37:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 21 10:22:06.614948 systemd[1]: Started cri-containerd-800d71dfeb6752c65eb9039c13cb2436a8203e68af01f55d8f2eb23c256a851e.scope - libcontainer container 800d71dfeb6752c65eb9039c13cb2436a8203e68af01f55d8f2eb23c256a851e. Apr 21 10:22:06.625884 systemd[1]: Started cri-containerd-95036358320c158c2ebbcfce7b4ecaf7ebd10d924e362e0d3861283a9f408afc.scope - libcontainer container 95036358320c158c2ebbcfce7b4ecaf7ebd10d924e362e0d3861283a9f408afc. 
Apr 21 10:22:06.642138 systemd[1]: Started cri-containerd-2c9d1815b8413bbb3793ba23bd78492222d75affd8ebaf401f706c1ffd82dd7a.scope - libcontainer container 2c9d1815b8413bbb3793ba23bd78492222d75affd8ebaf401f706c1ffd82dd7a. Apr 21 10:22:06.704045 containerd[1975]: time="2026-04-21T10:22:06.704004065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-37,Uid:675e0c1a180ac36d0944d7b4a8e46ebb,Namespace:kube-system,Attempt:0,} returns sandbox id \"800d71dfeb6752c65eb9039c13cb2436a8203e68af01f55d8f2eb23c256a851e\"" Apr 21 10:22:06.708356 kubelet[2821]: I0421 10:22:06.708210 2821 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-37" Apr 21 10:22:06.708796 kubelet[2821]: E0421 10:22:06.708587 2821 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.24.37:6443/api/v1/nodes\": dial tcp 172.31.24.37:6443: connect: connection refused" node="ip-172-31-24-37" Apr 21 10:22:06.720053 containerd[1975]: time="2026-04-21T10:22:06.718989921Z" level=info msg="CreateContainer within sandbox \"800d71dfeb6752c65eb9039c13cb2436a8203e68af01f55d8f2eb23c256a851e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 21 10:22:06.746560 containerd[1975]: time="2026-04-21T10:22:06.745688659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-37,Uid:87c00bb34c0c5fa32ec989f942123f49,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c9d1815b8413bbb3793ba23bd78492222d75affd8ebaf401f706c1ffd82dd7a\"" Apr 21 10:22:06.755571 containerd[1975]: time="2026-04-21T10:22:06.755512044Z" level=info msg="CreateContainer within sandbox \"2c9d1815b8413bbb3793ba23bd78492222d75affd8ebaf401f706c1ffd82dd7a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 21 10:22:06.762095 containerd[1975]: time="2026-04-21T10:22:06.761935118Z" level=info msg="CreateContainer within sandbox 
\"800d71dfeb6752c65eb9039c13cb2436a8203e68af01f55d8f2eb23c256a851e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d8e32c391646a4b02d7eb5f86aefeda3878c8d67e7c496ac94c6e5ba5961fd4f\"" Apr 21 10:22:06.762944 containerd[1975]: time="2026-04-21T10:22:06.762854067Z" level=info msg="StartContainer for \"d8e32c391646a4b02d7eb5f86aefeda3878c8d67e7c496ac94c6e5ba5961fd4f\"" Apr 21 10:22:06.766215 containerd[1975]: time="2026-04-21T10:22:06.766177754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-37,Uid:ef0d73cb4c6c8885554d887c8eada463,Namespace:kube-system,Attempt:0,} returns sandbox id \"95036358320c158c2ebbcfce7b4ecaf7ebd10d924e362e0d3861283a9f408afc\"" Apr 21 10:22:06.773410 containerd[1975]: time="2026-04-21T10:22:06.773369416Z" level=info msg="CreateContainer within sandbox \"95036358320c158c2ebbcfce7b4ecaf7ebd10d924e362e0d3861283a9f408afc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 21 10:22:06.793069 containerd[1975]: time="2026-04-21T10:22:06.792935235Z" level=info msg="CreateContainer within sandbox \"2c9d1815b8413bbb3793ba23bd78492222d75affd8ebaf401f706c1ffd82dd7a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b3f0c2186c211029f0661c3d44e16395baa72806b932299347ca4b057cec713\"" Apr 21 10:22:06.794231 containerd[1975]: time="2026-04-21T10:22:06.794200280Z" level=info msg="StartContainer for \"9b3f0c2186c211029f0661c3d44e16395baa72806b932299347ca4b057cec713\"" Apr 21 10:22:06.808061 systemd[1]: Started cri-containerd-d8e32c391646a4b02d7eb5f86aefeda3878c8d67e7c496ac94c6e5ba5961fd4f.scope - libcontainer container d8e32c391646a4b02d7eb5f86aefeda3878c8d67e7c496ac94c6e5ba5961fd4f. 
Apr 21 10:22:06.811188 containerd[1975]: time="2026-04-21T10:22:06.811141131Z" level=info msg="CreateContainer within sandbox \"95036358320c158c2ebbcfce7b4ecaf7ebd10d924e362e0d3861283a9f408afc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cc1917fca05e6ebd3f8b0d10d7f3b4fafc15ef183dcadf571842b6caec6ddc32\"" Apr 21 10:22:06.811767 containerd[1975]: time="2026-04-21T10:22:06.811737862Z" level=info msg="StartContainer for \"cc1917fca05e6ebd3f8b0d10d7f3b4fafc15ef183dcadf571842b6caec6ddc32\"" Apr 21 10:22:06.847945 systemd[1]: Started cri-containerd-9b3f0c2186c211029f0661c3d44e16395baa72806b932299347ca4b057cec713.scope - libcontainer container 9b3f0c2186c211029f0661c3d44e16395baa72806b932299347ca4b057cec713. Apr 21 10:22:06.879577 systemd[1]: Started cri-containerd-cc1917fca05e6ebd3f8b0d10d7f3b4fafc15ef183dcadf571842b6caec6ddc32.scope - libcontainer container cc1917fca05e6ebd3f8b0d10d7f3b4fafc15ef183dcadf571842b6caec6ddc32. Apr 21 10:22:06.901605 containerd[1975]: time="2026-04-21T10:22:06.901556512Z" level=info msg="StartContainer for \"d8e32c391646a4b02d7eb5f86aefeda3878c8d67e7c496ac94c6e5ba5961fd4f\" returns successfully" Apr 21 10:22:06.955347 containerd[1975]: time="2026-04-21T10:22:06.955211174Z" level=info msg="StartContainer for \"9b3f0c2186c211029f0661c3d44e16395baa72806b932299347ca4b057cec713\" returns successfully" Apr 21 10:22:07.007946 containerd[1975]: time="2026-04-21T10:22:07.007894505Z" level=info msg="StartContainer for \"cc1917fca05e6ebd3f8b0d10d7f3b4fafc15ef183dcadf571842b6caec6ddc32\" returns successfully" Apr 21 10:22:07.068232 kubelet[2821]: E0421 10:22:07.068063 2821 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.24.37:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.24.37:6443: connect: connection refused" 
logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 21 10:22:07.148781 kubelet[2821]: E0421 10:22:07.148676 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-37\" not found" node="ip-172-31-24-37" Apr 21 10:22:07.154164 kubelet[2821]: E0421 10:22:07.154131 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-37\" not found" node="ip-172-31-24-37" Apr 21 10:22:07.158544 kubelet[2821]: E0421 10:22:07.157063 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-37\" not found" node="ip-172-31-24-37" Apr 21 10:22:08.162556 kubelet[2821]: E0421 10:22:08.162095 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-37\" not found" node="ip-172-31-24-37" Apr 21 10:22:08.163721 kubelet[2821]: E0421 10:22:08.163532 2821 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-24-37\" not found" node="ip-172-31-24-37" Apr 21 10:22:08.311785 kubelet[2821]: I0421 10:22:08.311756 2821 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-37" Apr 21 10:22:08.948959 kubelet[2821]: E0421 10:22:08.948926 2821 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-37\" not found" node="ip-172-31-24-37" Apr 21 10:22:09.031384 kubelet[2821]: E0421 10:22:09.031284 2821 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-24-37.18a8581ac9552453 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-37,UID:ip-172-31-24-37,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-37,},FirstTimestamp:2026-04-21 10:22:05.067854931 +0000 UTC m=+0.550361245,LastTimestamp:2026-04-21 10:22:05.067854931 +0000 UTC m=+0.550361245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-37,}" Apr 21 10:22:09.055196 kubelet[2821]: I0421 10:22:09.054753 2821 apiserver.go:52] "Watching apiserver" Apr 21 10:22:09.099549 kubelet[2821]: I0421 10:22:09.097355 2821 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-37" Apr 21 10:22:09.100538 kubelet[2821]: E0421 10:22:09.100276 2821 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-24-37\": node \"ip-172-31-24-37\" not found" Apr 21 10:22:09.102582 kubelet[2821]: I0421 10:22:09.102390 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:09.105336 kubelet[2821]: I0421 10:22:09.105274 2821 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:22:09.119261 kubelet[2821]: E0421 10:22:09.119218 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:09.119261 kubelet[2821]: I0421 10:22:09.119260 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:09.125463 kubelet[2821]: E0421 10:22:09.125424 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-37\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:09.125463 kubelet[2821]: I0421 10:22:09.125461 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-37" Apr 21 10:22:09.129604 kubelet[2821]: E0421 10:22:09.129572 2821 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-24-37\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-24-37" Apr 21 10:22:10.565347 kubelet[2821]: I0421 10:22:10.565270 2821 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:10.992070 systemd[1]: Reloading requested from client PID 3111 ('systemctl') (unit session-7.scope)... Apr 21 10:22:10.992088 systemd[1]: Reloading... Apr 21 10:22:11.150556 zram_generator::config[3154]: No configuration found. Apr 21 10:22:11.285467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 21 10:22:11.294443 update_engine[1957]: I20260421 10:22:11.293581 1957 update_attempter.cc:509] Updating boot flags... Apr 21 10:22:11.380550 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3214) Apr 21 10:22:11.493786 systemd[1]: Reloading finished in 501 ms. Apr 21 10:22:11.624366 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:22:11.660914 systemd[1]: kubelet.service: Deactivated successfully. Apr 21 10:22:11.661210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 21 10:22:11.670551 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 31 scanned by (udev-worker) (3215) Apr 21 10:22:11.674783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 21 10:22:11.999833 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 21 10:22:12.013029 (kubelet)[3393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 21 10:22:12.087690 kubelet[3393]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 21 10:22:12.087690 kubelet[3393]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 21 10:22:12.087690 kubelet[3393]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 21 10:22:12.087690 kubelet[3393]: I0421 10:22:12.087100 3393 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 21 10:22:12.098118 kubelet[3393]: I0421 10:22:12.098085 3393 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 21 10:22:12.098255 kubelet[3393]: I0421 10:22:12.098246 3393 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 21 10:22:12.098598 kubelet[3393]: I0421 10:22:12.098508 3393 server.go:956] "Client rotation is on, will bootstrap in background" Apr 21 10:22:12.100589 kubelet[3393]: I0421 10:22:12.100507 3393 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 21 10:22:12.108214 kubelet[3393]: I0421 10:22:12.107677 3393 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 21 10:22:12.115226 kubelet[3393]: E0421 10:22:12.114913 3393 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 21 10:22:12.115495 kubelet[3393]: I0421 10:22:12.114955 3393 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 21 10:22:12.122435 kubelet[3393]: I0421 10:22:12.122049 3393 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Apr 21 10:22:12.123452 kubelet[3393]: I0421 10:22:12.123397 3393 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 21 10:22:12.125249 kubelet[3393]: I0421 10:22:12.123638 3393 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-37","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 21 10:22:12.125866 kubelet[3393]: I0421 10:22:12.125463 3393 topology_manager.go:138] "Creating topology manager with none policy" Apr 21 
10:22:12.125866 kubelet[3393]: I0421 10:22:12.125492 3393 container_manager_linux.go:303] "Creating device plugin manager" Apr 21 10:22:12.128639 kubelet[3393]: I0421 10:22:12.128614 3393 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:22:12.128965 kubelet[3393]: I0421 10:22:12.128941 3393 kubelet.go:480] "Attempting to sync node with API server" Apr 21 10:22:12.128965 kubelet[3393]: I0421 10:22:12.128962 3393 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 21 10:22:12.129101 kubelet[3393]: I0421 10:22:12.128998 3393 kubelet.go:386] "Adding apiserver pod source" Apr 21 10:22:12.129101 kubelet[3393]: I0421 10:22:12.129019 3393 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 21 10:22:12.134545 kubelet[3393]: I0421 10:22:12.133665 3393 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 21 10:22:12.134545 kubelet[3393]: I0421 10:22:12.134474 3393 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 21 10:22:12.138806 kubelet[3393]: I0421 10:22:12.138780 3393 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 21 10:22:12.138957 kubelet[3393]: I0421 10:22:12.138862 3393 server.go:1289] "Started kubelet" Apr 21 10:22:12.144269 kubelet[3393]: I0421 10:22:12.144248 3393 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 21 10:22:12.155630 kubelet[3393]: I0421 10:22:12.155586 3393 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 21 10:22:12.156928 kubelet[3393]: I0421 10:22:12.156905 3393 server.go:317] "Adding debug handlers to kubelet server" Apr 21 10:22:12.180752 kubelet[3393]: I0421 10:22:12.157111 3393 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 21 10:22:12.180752 kubelet[3393]: I0421 10:22:12.164208 3393 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 21 10:22:12.185016 kubelet[3393]: I0421 10:22:12.167793 3393 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 21 10:22:12.187540 kubelet[3393]: I0421 10:22:12.167808 3393 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 21 10:22:12.187540 kubelet[3393]: E0421 10:22:12.168108 3393 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-24-37\" not found" Apr 21 10:22:12.187540 kubelet[3393]: I0421 10:22:12.186481 3393 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 21 10:22:12.187540 kubelet[3393]: I0421 10:22:12.186804 3393 factory.go:223] Registration of the systemd container factory successfully Apr 21 10:22:12.187540 kubelet[3393]: I0421 10:22:12.187444 3393 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 21 10:22:12.189129 kubelet[3393]: I0421 10:22:12.189111 3393 reconciler.go:26] "Reconciler: start to sync state" Apr 21 10:22:12.198078 kubelet[3393]: E0421 10:22:12.196448 3393 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 21 10:22:12.199333 kubelet[3393]: I0421 10:22:12.198726 3393 factory.go:223] Registration of the containerd container factory successfully Apr 21 10:22:12.231362 kubelet[3393]: I0421 10:22:12.231317 3393 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 21 10:22:12.236707 kubelet[3393]: I0421 10:22:12.236681 3393 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 21 10:22:12.237517 kubelet[3393]: I0421 10:22:12.237172 3393 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 21 10:22:12.237517 kubelet[3393]: I0421 10:22:12.237205 3393 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 21 10:22:12.237517 kubelet[3393]: I0421 10:22:12.237213 3393 kubelet.go:2436] "Starting kubelet main sync loop" Apr 21 10:22:12.237517 kubelet[3393]: E0421 10:22:12.237255 3393 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 21 10:22:12.281467 kubelet[3393]: I0421 10:22:12.281360 3393 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 21 10:22:12.281467 kubelet[3393]: I0421 10:22:12.281381 3393 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 21 10:22:12.281467 kubelet[3393]: I0421 10:22:12.281402 3393 state_mem.go:36] "Initialized new in-memory state store" Apr 21 10:22:12.282764 kubelet[3393]: I0421 10:22:12.281572 3393 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 21 10:22:12.282764 kubelet[3393]: I0421 10:22:12.281586 3393 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 21 10:22:12.282764 kubelet[3393]: I0421 10:22:12.281607 3393 policy_none.go:49] "None policy: Start" Apr 21 10:22:12.282764 kubelet[3393]: I0421 10:22:12.281632 3393 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 21 10:22:12.282764 kubelet[3393]: I0421 10:22:12.281648 3393 state_mem.go:35] "Initializing new in-memory state store" Apr 21 10:22:12.282764 kubelet[3393]: I0421 10:22:12.281785 3393 state_mem.go:75] "Updated machine memory state" Apr 21 10:22:12.294321 kubelet[3393]: E0421 10:22:12.292030 3393 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 21 10:22:12.294321 kubelet[3393]: I0421 
10:22:12.292228 3393 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 21 10:22:12.294321 kubelet[3393]: I0421 10:22:12.292247 3393 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 21 10:22:12.294321 kubelet[3393]: I0421 10:22:12.292494 3393 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 21 10:22:12.296994 kubelet[3393]: E0421 10:22:12.296669 3393 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 21 10:22:12.340027 kubelet[3393]: I0421 10:22:12.338483 3393 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-24-37" Apr 21 10:22:12.340027 kubelet[3393]: I0421 10:22:12.338871 3393 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:12.340027 kubelet[3393]: I0421 10:22:12.339561 3393 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:12.351554 kubelet[3393]: E0421 10:22:12.351502 3393 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-37\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:12.391261 kubelet[3393]: I0421 10:22:12.391158 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/675e0c1a180ac36d0944d7b4a8e46ebb-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-37\" (UID: \"675e0c1a180ac36d0944d7b4a8e46ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:12.391632 kubelet[3393]: I0421 10:22:12.391478 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/675e0c1a180ac36d0944d7b4a8e46ebb-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-37\" (UID: \"675e0c1a180ac36d0944d7b4a8e46ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:12.391838 kubelet[3393]: I0421 10:22:12.391730 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/675e0c1a180ac36d0944d7b4a8e46ebb-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-37\" (UID: \"675e0c1a180ac36d0944d7b4a8e46ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:12.392110 kubelet[3393]: I0421 10:22:12.391896 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef0d73cb4c6c8885554d887c8eada463-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-37\" (UID: \"ef0d73cb4c6c8885554d887c8eada463\") " pod="kube-system/kube-scheduler-ip-172-31-24-37" Apr 21 10:22:12.392110 kubelet[3393]: I0421 10:22:12.392056 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87c00bb34c0c5fa32ec989f942123f49-ca-certs\") pod \"kube-apiserver-ip-172-31-24-37\" (UID: \"87c00bb34c0c5fa32ec989f942123f49\") " pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:12.392110 kubelet[3393]: I0421 10:22:12.392078 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/675e0c1a180ac36d0944d7b4a8e46ebb-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-37\" (UID: \"675e0c1a180ac36d0944d7b4a8e46ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:12.392308 kubelet[3393]: I0421 10:22:12.392203 3393 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/675e0c1a180ac36d0944d7b4a8e46ebb-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-37\" (UID: \"675e0c1a180ac36d0944d7b4a8e46ebb\") " pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:12.392308 kubelet[3393]: I0421 10:22:12.392221 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87c00bb34c0c5fa32ec989f942123f49-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-37\" (UID: \"87c00bb34c0c5fa32ec989f942123f49\") " pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:12.392308 kubelet[3393]: I0421 10:22:12.392245 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87c00bb34c0c5fa32ec989f942123f49-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-37\" (UID: \"87c00bb34c0c5fa32ec989f942123f49\") " pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:12.406561 kubelet[3393]: I0421 10:22:12.406363 3393 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-24-37" Apr 21 10:22:12.420768 kubelet[3393]: I0421 10:22:12.420731 3393 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-24-37" Apr 21 10:22:12.421122 kubelet[3393]: I0421 10:22:12.420827 3393 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-24-37" Apr 21 10:22:13.148389 kubelet[3393]: I0421 10:22:13.148338 3393 apiserver.go:52] "Watching apiserver" Apr 21 10:22:13.188335 kubelet[3393]: I0421 10:22:13.188266 3393 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 21 10:22:13.258770 kubelet[3393]: I0421 10:22:13.258739 3393 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:13.260586 kubelet[3393]: I0421 10:22:13.260368 3393 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:13.272549 kubelet[3393]: E0421 10:22:13.272084 3393 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-24-37\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-37" Apr 21 10:22:13.272710 kubelet[3393]: E0421 10:22:13.272664 3393 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-24-37\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-37" Apr 21 10:22:13.301273 kubelet[3393]: I0421 10:22:13.301194 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-37" podStartSLOduration=1.301176782 podStartE2EDuration="1.301176782s" podCreationTimestamp="2026-04-21 10:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:13.291257308 +0000 UTC m=+1.268212403" watchObservedRunningTime="2026-04-21 10:22:13.301176782 +0000 UTC m=+1.278131867" Apr 21 10:22:13.313086 kubelet[3393]: I0421 10:22:13.313017 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-37" podStartSLOduration=1.312995904 podStartE2EDuration="1.312995904s" podCreationTimestamp="2026-04-21 10:22:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:13.302240444 +0000 UTC m=+1.279195538" watchObservedRunningTime="2026-04-21 10:22:13.312995904 +0000 UTC m=+1.289951011" Apr 21 10:22:13.313302 kubelet[3393]: I0421 10:22:13.313123 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ip-172-31-24-37" podStartSLOduration=3.3131150050000002 podStartE2EDuration="3.313115005s" podCreationTimestamp="2026-04-21 10:22:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:13.312983614 +0000 UTC m=+1.289938708" watchObservedRunningTime="2026-04-21 10:22:13.313115005 +0000 UTC m=+1.290070100" Apr 21 10:22:15.536727 kubelet[3393]: I0421 10:22:15.536690 3393 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 21 10:22:15.538256 kubelet[3393]: I0421 10:22:15.537354 3393 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 21 10:22:15.538459 containerd[1975]: time="2026-04-21T10:22:15.537140312Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 21 10:22:16.464707 systemd[1]: Created slice kubepods-besteffort-podf5b0a5a1_1395_4035_af8e_49c540884faf.slice - libcontainer container kubepods-besteffort-podf5b0a5a1_1395_4035_af8e_49c540884faf.slice. 
Apr 21 10:22:16.523335 kubelet[3393]: I0421 10:22:16.523100 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f5b0a5a1-1395-4035-af8e-49c540884faf-kube-proxy\") pod \"kube-proxy-95hkx\" (UID: \"f5b0a5a1-1395-4035-af8e-49c540884faf\") " pod="kube-system/kube-proxy-95hkx" Apr 21 10:22:16.523335 kubelet[3393]: I0421 10:22:16.523172 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5b0a5a1-1395-4035-af8e-49c540884faf-lib-modules\") pod \"kube-proxy-95hkx\" (UID: \"f5b0a5a1-1395-4035-af8e-49c540884faf\") " pod="kube-system/kube-proxy-95hkx" Apr 21 10:22:16.523335 kubelet[3393]: I0421 10:22:16.523202 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5b0a5a1-1395-4035-af8e-49c540884faf-xtables-lock\") pod \"kube-proxy-95hkx\" (UID: \"f5b0a5a1-1395-4035-af8e-49c540884faf\") " pod="kube-system/kube-proxy-95hkx" Apr 21 10:22:16.523335 kubelet[3393]: I0421 10:22:16.523223 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s74dh\" (UniqueName: \"kubernetes.io/projected/f5b0a5a1-1395-4035-af8e-49c540884faf-kube-api-access-s74dh\") pod \"kube-proxy-95hkx\" (UID: \"f5b0a5a1-1395-4035-af8e-49c540884faf\") " pod="kube-system/kube-proxy-95hkx" Apr 21 10:22:16.707194 systemd[1]: Created slice kubepods-besteffort-pod27be1564_1265_4163_b9b9_005b246657b7.slice - libcontainer container kubepods-besteffort-pod27be1564_1265_4163_b9b9_005b246657b7.slice. 
Apr 21 10:22:16.724421 kubelet[3393]: I0421 10:22:16.724269 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/27be1564-1265-4163-b9b9-005b246657b7-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-8pz2f\" (UID: \"27be1564-1265-4163-b9b9-005b246657b7\") " pod="tigera-operator/tigera-operator-6bf85f8dd-8pz2f" Apr 21 10:22:16.724421 kubelet[3393]: I0421 10:22:16.724345 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qfsx\" (UniqueName: \"kubernetes.io/projected/27be1564-1265-4163-b9b9-005b246657b7-kube-api-access-4qfsx\") pod \"tigera-operator-6bf85f8dd-8pz2f\" (UID: \"27be1564-1265-4163-b9b9-005b246657b7\") " pod="tigera-operator/tigera-operator-6bf85f8dd-8pz2f" Apr 21 10:22:16.778005 containerd[1975]: time="2026-04-21T10:22:16.777960944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-95hkx,Uid:f5b0a5a1-1395-4035-af8e-49c540884faf,Namespace:kube-system,Attempt:0,}" Apr 21 10:22:16.816501 containerd[1975]: time="2026-04-21T10:22:16.816090921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:16.816501 containerd[1975]: time="2026-04-21T10:22:16.816184507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:16.816501 containerd[1975]: time="2026-04-21T10:22:16.816206838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:16.816501 containerd[1975]: time="2026-04-21T10:22:16.816344810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:16.855759 systemd[1]: Started cri-containerd-970c9673211c59fefb1d341071a298987fffbe38fcad42086ce9de301578a87b.scope - libcontainer container 970c9673211c59fefb1d341071a298987fffbe38fcad42086ce9de301578a87b. Apr 21 10:22:16.882462 containerd[1975]: time="2026-04-21T10:22:16.881914592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-95hkx,Uid:f5b0a5a1-1395-4035-af8e-49c540884faf,Namespace:kube-system,Attempt:0,} returns sandbox id \"970c9673211c59fefb1d341071a298987fffbe38fcad42086ce9de301578a87b\"" Apr 21 10:22:16.892235 containerd[1975]: time="2026-04-21T10:22:16.892190482Z" level=info msg="CreateContainer within sandbox \"970c9673211c59fefb1d341071a298987fffbe38fcad42086ce9de301578a87b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 21 10:22:16.919220 containerd[1975]: time="2026-04-21T10:22:16.919167496Z" level=info msg="CreateContainer within sandbox \"970c9673211c59fefb1d341071a298987fffbe38fcad42086ce9de301578a87b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1a89667e4a65266fae020ff32c50d3f63379c2ccb910f7e9c4fa67561c09680e\"" Apr 21 10:22:16.921272 containerd[1975]: time="2026-04-21T10:22:16.920358631Z" level=info msg="StartContainer for \"1a89667e4a65266fae020ff32c50d3f63379c2ccb910f7e9c4fa67561c09680e\"" Apr 21 10:22:16.951033 systemd[1]: Started cri-containerd-1a89667e4a65266fae020ff32c50d3f63379c2ccb910f7e9c4fa67561c09680e.scope - libcontainer container 1a89667e4a65266fae020ff32c50d3f63379c2ccb910f7e9c4fa67561c09680e. 
Apr 21 10:22:16.986749 containerd[1975]: time="2026-04-21T10:22:16.986415051Z" level=info msg="StartContainer for \"1a89667e4a65266fae020ff32c50d3f63379c2ccb910f7e9c4fa67561c09680e\" returns successfully" Apr 21 10:22:17.012857 containerd[1975]: time="2026-04-21T10:22:17.012394771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-8pz2f,Uid:27be1564-1265-4163-b9b9-005b246657b7,Namespace:tigera-operator,Attempt:0,}" Apr 21 10:22:17.054730 containerd[1975]: time="2026-04-21T10:22:17.054595617Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:17.054730 containerd[1975]: time="2026-04-21T10:22:17.054691863Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:17.055074 containerd[1975]: time="2026-04-21T10:22:17.054715159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:17.055074 containerd[1975]: time="2026-04-21T10:22:17.054964261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:17.080789 systemd[1]: Started cri-containerd-af358ea30b78c727e3413ecb74b1f8e178403819563b5a3ec3e6353d8f50de92.scope - libcontainer container af358ea30b78c727e3413ecb74b1f8e178403819563b5a3ec3e6353d8f50de92. 
Apr 21 10:22:17.159348 containerd[1975]: time="2026-04-21T10:22:17.158934176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-8pz2f,Uid:27be1564-1265-4163-b9b9-005b246657b7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"af358ea30b78c727e3413ecb74b1f8e178403819563b5a3ec3e6353d8f50de92\"" Apr 21 10:22:17.162802 containerd[1975]: time="2026-04-21T10:22:17.162311149Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 21 10:22:17.472323 kubelet[3393]: I0421 10:22:17.472255 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-95hkx" podStartSLOduration=1.4722341239999999 podStartE2EDuration="1.472234124s" podCreationTimestamp="2026-04-21 10:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:17.284953039 +0000 UTC m=+5.261908134" watchObservedRunningTime="2026-04-21 10:22:17.472234124 +0000 UTC m=+5.449189219" Apr 21 10:22:18.517418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459104634.mount: Deactivated successfully. 
Apr 21 10:22:20.299913 containerd[1975]: time="2026-04-21T10:22:20.299829340Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:20.302001 containerd[1975]: time="2026-04-21T10:22:20.301849825Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 21 10:22:20.304511 containerd[1975]: time="2026-04-21T10:22:20.304117364Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:20.308870 containerd[1975]: time="2026-04-21T10:22:20.308826301Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:20.309676 containerd[1975]: time="2026-04-21T10:22:20.309617340Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.147264367s" Apr 21 10:22:20.309676 containerd[1975]: time="2026-04-21T10:22:20.309661275Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 21 10:22:20.316363 containerd[1975]: time="2026-04-21T10:22:20.316316193Z" level=info msg="CreateContainer within sandbox \"af358ea30b78c727e3413ecb74b1f8e178403819563b5a3ec3e6353d8f50de92\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 21 10:22:20.345693 containerd[1975]: time="2026-04-21T10:22:20.345650609Z" level=info msg="CreateContainer within sandbox 
\"af358ea30b78c727e3413ecb74b1f8e178403819563b5a3ec3e6353d8f50de92\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"041240d186a74ab584da9cc8bbbbd2a548699c6d9b9c663d60a0cb0ed6bc4c59\"" Apr 21 10:22:20.346255 containerd[1975]: time="2026-04-21T10:22:20.346220116Z" level=info msg="StartContainer for \"041240d186a74ab584da9cc8bbbbd2a548699c6d9b9c663d60a0cb0ed6bc4c59\"" Apr 21 10:22:20.380644 systemd[1]: run-containerd-runc-k8s.io-041240d186a74ab584da9cc8bbbbd2a548699c6d9b9c663d60a0cb0ed6bc4c59-runc.kn6QEX.mount: Deactivated successfully. Apr 21 10:22:20.390895 systemd[1]: Started cri-containerd-041240d186a74ab584da9cc8bbbbd2a548699c6d9b9c663d60a0cb0ed6bc4c59.scope - libcontainer container 041240d186a74ab584da9cc8bbbbd2a548699c6d9b9c663d60a0cb0ed6bc4c59. Apr 21 10:22:20.422351 containerd[1975]: time="2026-04-21T10:22:20.422306704Z" level=info msg="StartContainer for \"041240d186a74ab584da9cc8bbbbd2a548699c6d9b9c663d60a0cb0ed6bc4c59\" returns successfully" Apr 21 10:22:22.933727 kubelet[3393]: I0421 10:22:22.933587 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-8pz2f" podStartSLOduration=3.784350153 podStartE2EDuration="6.933565958s" podCreationTimestamp="2026-04-21 10:22:16 +0000 UTC" firstStartedPulling="2026-04-21 10:22:17.161941964 +0000 UTC m=+5.138897048" lastFinishedPulling="2026-04-21 10:22:20.31115778 +0000 UTC m=+8.288112853" observedRunningTime="2026-04-21 10:22:21.289291336 +0000 UTC m=+9.266246431" watchObservedRunningTime="2026-04-21 10:22:22.933565958 +0000 UTC m=+10.910521068" Apr 21 10:22:27.657627 sudo[2309]: pam_unix(sudo:session): session closed for user root Apr 21 10:22:27.821270 sshd[2306]: pam_unix(sshd:session): session closed for user core Apr 21 10:22:27.829364 systemd-logind[1955]: Session 7 logged out. Waiting for processes to exit. 
Apr 21 10:22:27.830760 systemd[1]: sshd@6-172.31.24.37:22-50.85.169.122:39452.service: Deactivated successfully. Apr 21 10:22:27.843127 systemd[1]: session-7.scope: Deactivated successfully. Apr 21 10:22:27.845293 systemd[1]: session-7.scope: Consumed 7.196s CPU time, 144.0M memory peak, 0B memory swap peak. Apr 21 10:22:27.848204 systemd-logind[1955]: Removed session 7. Apr 21 10:22:28.860931 systemd[1]: Created slice kubepods-besteffort-pod61f89458_e21e_47bb_b688_9c43fcbaaec1.slice - libcontainer container kubepods-besteffort-pod61f89458_e21e_47bb_b688_9c43fcbaaec1.slice. Apr 21 10:22:28.919583 kubelet[3393]: I0421 10:22:28.919518 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g57fz\" (UniqueName: \"kubernetes.io/projected/61f89458-e21e-47bb-b688-9c43fcbaaec1-kube-api-access-g57fz\") pod \"calico-typha-588df96b4b-jrhxr\" (UID: \"61f89458-e21e-47bb-b688-9c43fcbaaec1\") " pod="calico-system/calico-typha-588df96b4b-jrhxr" Apr 21 10:22:28.920071 kubelet[3393]: I0421 10:22:28.919617 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/61f89458-e21e-47bb-b688-9c43fcbaaec1-typha-certs\") pod \"calico-typha-588df96b4b-jrhxr\" (UID: \"61f89458-e21e-47bb-b688-9c43fcbaaec1\") " pod="calico-system/calico-typha-588df96b4b-jrhxr" Apr 21 10:22:28.920071 kubelet[3393]: I0421 10:22:28.919647 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/61f89458-e21e-47bb-b688-9c43fcbaaec1-tigera-ca-bundle\") pod \"calico-typha-588df96b4b-jrhxr\" (UID: \"61f89458-e21e-47bb-b688-9c43fcbaaec1\") " pod="calico-system/calico-typha-588df96b4b-jrhxr" Apr 21 10:22:28.972218 systemd[1]: Created slice kubepods-besteffort-podd9719bc3_8fdc_4505_97f1_2029c7d108af.slice - libcontainer container 
kubepods-besteffort-podd9719bc3_8fdc_4505_97f1_2029c7d108af.slice. Apr 21 10:22:29.020291 kubelet[3393]: I0421 10:22:29.020244 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-flexvol-driver-host\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.020684 kubelet[3393]: I0421 10:22:29.020563 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-bpffs\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.020684 kubelet[3393]: I0421 10:22:29.020636 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-nodeproc\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.020989 kubelet[3393]: I0421 10:22:29.020668 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d9719bc3-8fdc-4505-97f1-2029c7d108af-node-certs\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.020989 kubelet[3393]: I0421 10:22:29.020864 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d9719bc3-8fdc-4505-97f1-2029c7d108af-tigera-ca-bundle\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.020989 
kubelet[3393]: I0421 10:22:29.020908 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-xtables-lock\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.020989 kubelet[3393]: I0421 10:22:29.020939 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-var-lib-calico\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.021667 kubelet[3393]: I0421 10:22:29.020965 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-cni-log-dir\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.021667 kubelet[3393]: I0421 10:22:29.021236 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-lib-modules\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.021667 kubelet[3393]: I0421 10:22:29.021621 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-var-run-calico\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.023033 kubelet[3393]: I0421 10:22:29.022174 3393 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-cni-net-dir\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.024111 kubelet[3393]: I0421 10:22:29.024055 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-cni-bin-dir\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.024286 kubelet[3393]: I0421 10:22:29.024087 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-sys-fs\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.024286 kubelet[3393]: I0421 10:22:29.024212 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgwzr\" (UniqueName: \"kubernetes.io/projected/d9719bc3-8fdc-4505-97f1-2029c7d108af-kube-api-access-bgwzr\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.024553 kubelet[3393]: I0421 10:22:29.024403 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d9719bc3-8fdc-4505-97f1-2029c7d108af-policysync\") pod \"calico-node-fnxlr\" (UID: \"d9719bc3-8fdc-4505-97f1-2029c7d108af\") " pod="calico-system/calico-node-fnxlr" Apr 21 10:22:29.138939 kubelet[3393]: E0421 10:22:29.138724 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:29.142497 kubelet[3393]: E0421 10:22:29.142456 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.142497 kubelet[3393]: W0421 10:22:29.142487 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.142688 kubelet[3393]: E0421 10:22:29.142516 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.181932 kubelet[3393]: E0421 10:22:29.181823 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.181932 kubelet[3393]: W0421 10:22:29.181850 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.181932 kubelet[3393]: E0421 10:22:29.181871 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.186555 containerd[1975]: time="2026-04-21T10:22:29.186492381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588df96b4b-jrhxr,Uid:61f89458-e21e-47bb-b688-9c43fcbaaec1,Namespace:calico-system,Attempt:0,}" Apr 21 10:22:29.208597 kubelet[3393]: E0421 10:22:29.208559 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.208597 kubelet[3393]: W0421 10:22:29.208595 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.208952 kubelet[3393]: E0421 10:22:29.208623 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.208952 kubelet[3393]: E0421 10:22:29.208898 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.208952 kubelet[3393]: W0421 10:22:29.208913 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.208952 kubelet[3393]: E0421 10:22:29.208929 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.210904 kubelet[3393]: E0421 10:22:29.210782 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.210904 kubelet[3393]: W0421 10:22:29.210903 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.211110 kubelet[3393]: E0421 10:22:29.210922 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.216872 kubelet[3393]: E0421 10:22:29.216622 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.216872 kubelet[3393]: W0421 10:22:29.216647 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.216872 kubelet[3393]: E0421 10:22:29.216692 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.219326 kubelet[3393]: E0421 10:22:29.218338 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.219326 kubelet[3393]: W0421 10:22:29.218364 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.219326 kubelet[3393]: E0421 10:22:29.218397 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.230535 kubelet[3393]: E0421 10:22:29.229811 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.230535 kubelet[3393]: W0421 10:22:29.229842 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.230535 kubelet[3393]: E0421 10:22:29.229875 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.232250 kubelet[3393]: E0421 10:22:29.231370 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.232250 kubelet[3393]: W0421 10:22:29.231394 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.232250 kubelet[3393]: E0421 10:22:29.231427 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.236338 kubelet[3393]: E0421 10:22:29.236309 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.236498 kubelet[3393]: W0421 10:22:29.236478 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.236618 kubelet[3393]: E0421 10:22:29.236603 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.237498 kubelet[3393]: E0421 10:22:29.237362 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.238094 kubelet[3393]: W0421 10:22:29.237824 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.238641 kubelet[3393]: E0421 10:22:29.237866 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.241924 kubelet[3393]: E0421 10:22:29.241703 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.241924 kubelet[3393]: W0421 10:22:29.241730 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.241924 kubelet[3393]: E0421 10:22:29.241766 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.243170 kubelet[3393]: E0421 10:22:29.242937 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.243170 kubelet[3393]: W0421 10:22:29.242959 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.243170 kubelet[3393]: E0421 10:22:29.242982 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.246725 kubelet[3393]: E0421 10:22:29.245157 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.246725 kubelet[3393]: W0421 10:22:29.245172 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.246725 kubelet[3393]: E0421 10:22:29.245192 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.248340 kubelet[3393]: E0421 10:22:29.247771 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.248340 kubelet[3393]: W0421 10:22:29.247790 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.248340 kubelet[3393]: E0421 10:22:29.247809 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.248340 kubelet[3393]: E0421 10:22:29.248310 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.248340 kubelet[3393]: W0421 10:22:29.248321 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.248340 kubelet[3393]: E0421 10:22:29.248337 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.250237 kubelet[3393]: E0421 10:22:29.248604 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.250237 kubelet[3393]: W0421 10:22:29.248614 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.250237 kubelet[3393]: E0421 10:22:29.248627 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.250237 kubelet[3393]: E0421 10:22:29.248854 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.250237 kubelet[3393]: W0421 10:22:29.248864 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.250237 kubelet[3393]: E0421 10:22:29.248876 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.250809 kubelet[3393]: E0421 10:22:29.250788 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.250809 kubelet[3393]: W0421 10:22:29.250807 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.250955 kubelet[3393]: E0421 10:22:29.250825 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.251061 kubelet[3393]: E0421 10:22:29.251046 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.251124 kubelet[3393]: W0421 10:22:29.251061 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.251124 kubelet[3393]: E0421 10:22:29.251074 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.251786 kubelet[3393]: E0421 10:22:29.251281 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.251786 kubelet[3393]: W0421 10:22:29.251293 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.251786 kubelet[3393]: E0421 10:22:29.251304 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.251786 kubelet[3393]: E0421 10:22:29.251508 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.251786 kubelet[3393]: W0421 10:22:29.251516 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.251786 kubelet[3393]: E0421 10:22:29.251546 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.253547 kubelet[3393]: E0421 10:22:29.252897 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.253547 kubelet[3393]: W0421 10:22:29.252913 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.253547 kubelet[3393]: E0421 10:22:29.252927 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.253547 kubelet[3393]: I0421 10:22:29.252976 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0da02c82-49c9-40d9-881a-313b594008da-varrun\") pod \"csi-node-driver-5kwvj\" (UID: \"0da02c82-49c9-40d9-881a-313b594008da\") " pod="calico-system/csi-node-driver-5kwvj" Apr 21 10:22:29.253547 kubelet[3393]: E0421 10:22:29.253267 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.253547 kubelet[3393]: W0421 10:22:29.253277 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.253547 kubelet[3393]: E0421 10:22:29.253290 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.257826 kubelet[3393]: E0421 10:22:29.254594 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.257826 kubelet[3393]: W0421 10:22:29.254623 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.257826 kubelet[3393]: E0421 10:22:29.254638 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.257826 kubelet[3393]: I0421 10:22:29.254676 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0da02c82-49c9-40d9-881a-313b594008da-kubelet-dir\") pod \"csi-node-driver-5kwvj\" (UID: \"0da02c82-49c9-40d9-881a-313b594008da\") " pod="calico-system/csi-node-driver-5kwvj" Apr 21 10:22:29.257826 kubelet[3393]: E0421 10:22:29.254927 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.257826 kubelet[3393]: W0421 10:22:29.254938 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.257826 kubelet[3393]: E0421 10:22:29.254950 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.257826 kubelet[3393]: E0421 10:22:29.255461 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.257826 kubelet[3393]: W0421 10:22:29.255472 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.258321 kubelet[3393]: E0421 10:22:29.255488 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.258321 kubelet[3393]: E0421 10:22:29.257761 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.258321 kubelet[3393]: W0421 10:22:29.257777 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.258321 kubelet[3393]: E0421 10:22:29.257792 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.258321 kubelet[3393]: E0421 10:22:29.258081 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.258321 kubelet[3393]: W0421 10:22:29.258091 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.258321 kubelet[3393]: E0421 10:22:29.258104 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.258321 kubelet[3393]: I0421 10:22:29.258138 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0da02c82-49c9-40d9-881a-313b594008da-registration-dir\") pod \"csi-node-driver-5kwvj\" (UID: \"0da02c82-49c9-40d9-881a-313b594008da\") " pod="calico-system/csi-node-driver-5kwvj" Apr 21 10:22:29.258692 kubelet[3393]: E0421 10:22:29.258392 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.258692 kubelet[3393]: W0421 10:22:29.258404 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.258692 kubelet[3393]: E0421 10:22:29.258431 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.258692 kubelet[3393]: I0421 10:22:29.258468 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0da02c82-49c9-40d9-881a-313b594008da-socket-dir\") pod \"csi-node-driver-5kwvj\" (UID: \"0da02c82-49c9-40d9-881a-313b594008da\") " pod="calico-system/csi-node-driver-5kwvj" Apr 21 10:22:29.258862 kubelet[3393]: E0421 10:22:29.258816 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.258862 kubelet[3393]: W0421 10:22:29.258828 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.258862 kubelet[3393]: E0421 10:22:29.258840 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.258986 kubelet[3393]: I0421 10:22:29.258875 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2djfd\" (UniqueName: \"kubernetes.io/projected/0da02c82-49c9-40d9-881a-313b594008da-kube-api-access-2djfd\") pod \"csi-node-driver-5kwvj\" (UID: \"0da02c82-49c9-40d9-881a-313b594008da\") " pod="calico-system/csi-node-driver-5kwvj" Apr 21 10:22:29.259189 kubelet[3393]: E0421 10:22:29.259172 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.259246 kubelet[3393]: W0421 10:22:29.259188 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.259246 kubelet[3393]: E0421 10:22:29.259201 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.261823 kubelet[3393]: E0421 10:22:29.261617 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.261823 kubelet[3393]: W0421 10:22:29.261639 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.261823 kubelet[3393]: E0421 10:22:29.261655 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.262036 kubelet[3393]: E0421 10:22:29.261972 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.262036 kubelet[3393]: W0421 10:22:29.261982 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.262036 kubelet[3393]: E0421 10:22:29.261996 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.266547 kubelet[3393]: E0421 10:22:29.262234 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.266547 kubelet[3393]: W0421 10:22:29.262246 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.266547 kubelet[3393]: E0421 10:22:29.262258 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.266547 kubelet[3393]: E0421 10:22:29.262518 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.266547 kubelet[3393]: W0421 10:22:29.262590 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.266547 kubelet[3393]: E0421 10:22:29.262603 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.266547 kubelet[3393]: E0421 10:22:29.262833 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.266547 kubelet[3393]: W0421 10:22:29.262842 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.266547 kubelet[3393]: E0421 10:22:29.262853 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.279722 containerd[1975]: time="2026-04-21T10:22:29.279677408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fnxlr,Uid:d9719bc3-8fdc-4505-97f1-2029c7d108af,Namespace:calico-system,Attempt:0,}" Apr 21 10:22:29.333328 containerd[1975]: time="2026-04-21T10:22:29.333235364Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:29.333328 containerd[1975]: time="2026-04-21T10:22:29.333296930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:29.333566 containerd[1975]: time="2026-04-21T10:22:29.333312413Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:29.333950 containerd[1975]: time="2026-04-21T10:22:29.333717727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:29.361572 kubelet[3393]: E0421 10:22:29.360580 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.361572 kubelet[3393]: W0421 10:22:29.360609 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.361572 kubelet[3393]: E0421 10:22:29.360635 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.367275 kubelet[3393]: E0421 10:22:29.365298 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.367275 kubelet[3393]: W0421 10:22:29.365326 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.367275 kubelet[3393]: E0421 10:22:29.365350 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.368292 kubelet[3393]: E0421 10:22:29.368153 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.368899 kubelet[3393]: W0421 10:22:29.368438 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.369213 kubelet[3393]: E0421 10:22:29.369010 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.370319 kubelet[3393]: E0421 10:22:29.369586 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.370644 kubelet[3393]: W0421 10:22:29.370431 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.370644 kubelet[3393]: E0421 10:22:29.370463 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 21 10:22:29.399151 kubelet[3393]: E0421 10:22:29.399132 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.399361 kubelet[3393]: W0421 10:22:29.399343 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.399451 kubelet[3393]: E0421 10:22:29.399438 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.420207 containerd[1975]: time="2026-04-21T10:22:29.419310720Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:29.420207 containerd[1975]: time="2026-04-21T10:22:29.419409123Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:29.420207 containerd[1975]: time="2026-04-21T10:22:29.419428544Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:29.423795 containerd[1975]: time="2026-04-21T10:22:29.421546151Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:29.460772 systemd[1]: Started cri-containerd-b6636c7b074bfa34bd13d36661c1da7876e39a8c8758efb787bd9057e3048106.scope - libcontainer container b6636c7b074bfa34bd13d36661c1da7876e39a8c8758efb787bd9057e3048106. 
Apr 21 10:22:29.488362 kubelet[3393]: E0421 10:22:29.488332 3393 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 21 10:22:29.488858 kubelet[3393]: W0421 10:22:29.488764 3393 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 21 10:22:29.488858 kubelet[3393]: E0421 10:22:29.488797 3393 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 21 10:22:29.511896 systemd[1]: Started cri-containerd-28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0.scope - libcontainer container 28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0. Apr 21 10:22:29.544791 containerd[1975]: time="2026-04-21T10:22:29.544745031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fnxlr,Uid:d9719bc3-8fdc-4505-97f1-2029c7d108af,Namespace:calico-system,Attempt:0,} returns sandbox id \"28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0\"" Apr 21 10:22:29.551864 containerd[1975]: time="2026-04-21T10:22:29.551244820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 21 10:22:29.577964 containerd[1975]: time="2026-04-21T10:22:29.577914931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-588df96b4b-jrhxr,Uid:61f89458-e21e-47bb-b688-9c43fcbaaec1,Namespace:calico-system,Attempt:0,} returns sandbox id \"b6636c7b074bfa34bd13d36661c1da7876e39a8c8758efb787bd9057e3048106\"" Apr 21 10:22:31.194344 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485112468.mount: Deactivated successfully. 
Apr 21 10:22:31.244313 kubelet[3393]: E0421 10:22:31.244076 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:31.332621 containerd[1975]: time="2026-04-21T10:22:31.332560616Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:31.334785 containerd[1975]: time="2026-04-21T10:22:31.334699083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Apr 21 10:22:31.337675 containerd[1975]: time="2026-04-21T10:22:31.337628281Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:31.342388 containerd[1975]: time="2026-04-21T10:22:31.342223861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:31.362574 containerd[1975]: time="2026-04-21T10:22:31.362358107Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.810956424s" Apr 21 10:22:31.362574 containerd[1975]: time="2026-04-21T10:22:31.362418586Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 21 10:22:31.365989 containerd[1975]: time="2026-04-21T10:22:31.364180534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 21 10:22:31.370047 containerd[1975]: time="2026-04-21T10:22:31.370006783Z" level=info msg="CreateContainer within sandbox \"28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 21 10:22:31.397655 containerd[1975]: time="2026-04-21T10:22:31.397598264Z" level=info msg="CreateContainer within sandbox \"28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"493d3f4f45438d4173f02bc417b703136b30f638f907eccd7dc3993663066628\"" Apr 21 10:22:31.401062 containerd[1975]: time="2026-04-21T10:22:31.398448906Z" level=info msg="StartContainer for \"493d3f4f45438d4173f02bc417b703136b30f638f907eccd7dc3993663066628\"" Apr 21 10:22:31.454960 systemd[1]: Started cri-containerd-493d3f4f45438d4173f02bc417b703136b30f638f907eccd7dc3993663066628.scope - libcontainer container 493d3f4f45438d4173f02bc417b703136b30f638f907eccd7dc3993663066628. Apr 21 10:22:31.492463 containerd[1975]: time="2026-04-21T10:22:31.492419691Z" level=info msg="StartContainer for \"493d3f4f45438d4173f02bc417b703136b30f638f907eccd7dc3993663066628\" returns successfully" Apr 21 10:22:31.505141 systemd[1]: cri-containerd-493d3f4f45438d4173f02bc417b703136b30f638f907eccd7dc3993663066628.scope: Deactivated successfully. 
Apr 21 10:22:31.558300 containerd[1975]: time="2026-04-21T10:22:31.540588162Z" level=info msg="shim disconnected" id=493d3f4f45438d4173f02bc417b703136b30f638f907eccd7dc3993663066628 namespace=k8s.io Apr 21 10:22:31.558300 containerd[1975]: time="2026-04-21T10:22:31.558294363Z" level=warning msg="cleaning up after shim disconnected" id=493d3f4f45438d4173f02bc417b703136b30f638f907eccd7dc3993663066628 namespace=k8s.io Apr 21 10:22:31.558723 containerd[1975]: time="2026-04-21T10:22:31.558313812Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:22:32.150856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-493d3f4f45438d4173f02bc417b703136b30f638f907eccd7dc3993663066628-rootfs.mount: Deactivated successfully. Apr 21 10:22:33.237665 kubelet[3393]: E0421 10:22:33.237617 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:34.015227 containerd[1975]: time="2026-04-21T10:22:34.015169235Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:34.017151 containerd[1975]: time="2026-04-21T10:22:34.017073171Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Apr 21 10:22:34.019477 containerd[1975]: time="2026-04-21T10:22:34.019411507Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:34.023549 containerd[1975]: time="2026-04-21T10:22:34.023461939Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:34.025510 containerd[1975]: time="2026-04-21T10:22:34.025186489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.660962337s" Apr 21 10:22:34.025510 containerd[1975]: time="2026-04-21T10:22:34.025236051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 21 10:22:34.027210 containerd[1975]: time="2026-04-21T10:22:34.027173681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 21 10:22:34.053119 containerd[1975]: time="2026-04-21T10:22:34.053078713Z" level=info msg="CreateContainer within sandbox \"b6636c7b074bfa34bd13d36661c1da7876e39a8c8758efb787bd9057e3048106\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 21 10:22:34.082146 containerd[1975]: time="2026-04-21T10:22:34.082078594Z" level=info msg="CreateContainer within sandbox \"b6636c7b074bfa34bd13d36661c1da7876e39a8c8758efb787bd9057e3048106\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ed0fbf9723dfacaa69cd484cf0fd08109a92c397ed3d941a1751463f3aa50307\"" Apr 21 10:22:34.084273 containerd[1975]: time="2026-04-21T10:22:34.082911850Z" level=info msg="StartContainer for \"ed0fbf9723dfacaa69cd484cf0fd08109a92c397ed3d941a1751463f3aa50307\"" Apr 21 10:22:34.120868 systemd[1]: Started cri-containerd-ed0fbf9723dfacaa69cd484cf0fd08109a92c397ed3d941a1751463f3aa50307.scope - libcontainer container 
ed0fbf9723dfacaa69cd484cf0fd08109a92c397ed3d941a1751463f3aa50307. Apr 21 10:22:34.176692 containerd[1975]: time="2026-04-21T10:22:34.176642703Z" level=info msg="StartContainer for \"ed0fbf9723dfacaa69cd484cf0fd08109a92c397ed3d941a1751463f3aa50307\" returns successfully" Apr 21 10:22:35.238300 kubelet[3393]: E0421 10:22:35.237932 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:35.358495 kubelet[3393]: I0421 10:22:35.358455 3393 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:37.240328 kubelet[3393]: E0421 10:22:37.238497 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:39.238030 kubelet[3393]: E0421 10:22:39.237964 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:41.239320 kubelet[3393]: E0421 10:22:41.238728 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:43.237842 kubelet[3393]: E0421 10:22:43.237785 
3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:44.660953 kubelet[3393]: I0421 10:22:44.658878 3393 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:22:44.844859 kubelet[3393]: I0421 10:22:44.841311 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-588df96b4b-jrhxr" podStartSLOduration=12.383814206 podStartE2EDuration="16.83131246s" podCreationTimestamp="2026-04-21 10:22:28 +0000 UTC" firstStartedPulling="2026-04-21 10:22:29.579374332 +0000 UTC m=+17.556329410" lastFinishedPulling="2026-04-21 10:22:34.026872576 +0000 UTC m=+22.003827664" observedRunningTime="2026-04-21 10:22:34.393912684 +0000 UTC m=+22.370867778" watchObservedRunningTime="2026-04-21 10:22:44.83131246 +0000 UTC m=+32.808267554" Apr 21 10:22:45.238683 kubelet[3393]: E0421 10:22:45.238090 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:45.294839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4217852218.mount: Deactivated successfully. 
Apr 21 10:22:45.359554 containerd[1975]: time="2026-04-21T10:22:45.355834123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 21 10:22:45.359554 containerd[1975]: time="2026-04-21T10:22:45.352501267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:45.361140 containerd[1975]: time="2026-04-21T10:22:45.361082054Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:45.364587 containerd[1975]: time="2026-04-21T10:22:45.364485656Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:45.365782 containerd[1975]: time="2026-04-21T10:22:45.365183671Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 11.337967937s" Apr 21 10:22:45.365782 containerd[1975]: time="2026-04-21T10:22:45.365224103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 21 10:22:45.416989 containerd[1975]: time="2026-04-21T10:22:45.416943571Z" level=info msg="CreateContainer within sandbox \"28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 21 10:22:45.478859 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990131276.mount: 
Deactivated successfully. Apr 21 10:22:45.504291 containerd[1975]: time="2026-04-21T10:22:45.504162959Z" level=info msg="CreateContainer within sandbox \"28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"bba067f0a2349e2b2d8e5891944ecdd5e4af321905c44c556ab335d619ee1184\"" Apr 21 10:22:45.505726 containerd[1975]: time="2026-04-21T10:22:45.505489108Z" level=info msg="StartContainer for \"bba067f0a2349e2b2d8e5891944ecdd5e4af321905c44c556ab335d619ee1184\"" Apr 21 10:22:45.578056 systemd[1]: Started cri-containerd-bba067f0a2349e2b2d8e5891944ecdd5e4af321905c44c556ab335d619ee1184.scope - libcontainer container bba067f0a2349e2b2d8e5891944ecdd5e4af321905c44c556ab335d619ee1184. Apr 21 10:22:45.634505 containerd[1975]: time="2026-04-21T10:22:45.634363750Z" level=info msg="StartContainer for \"bba067f0a2349e2b2d8e5891944ecdd5e4af321905c44c556ab335d619ee1184\" returns successfully" Apr 21 10:22:45.711802 systemd[1]: cri-containerd-bba067f0a2349e2b2d8e5891944ecdd5e4af321905c44c556ab335d619ee1184.scope: Deactivated successfully. Apr 21 10:22:45.822708 containerd[1975]: time="2026-04-21T10:22:45.814182252Z" level=info msg="shim disconnected" id=bba067f0a2349e2b2d8e5891944ecdd5e4af321905c44c556ab335d619ee1184 namespace=k8s.io Apr 21 10:22:45.822708 containerd[1975]: time="2026-04-21T10:22:45.822697281Z" level=warning msg="cleaning up after shim disconnected" id=bba067f0a2349e2b2d8e5891944ecdd5e4af321905c44c556ab335d619ee1184 namespace=k8s.io Apr 21 10:22:45.822708 containerd[1975]: time="2026-04-21T10:22:45.822716570Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:22:46.297384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bba067f0a2349e2b2d8e5891944ecdd5e4af321905c44c556ab335d619ee1184-rootfs.mount: Deactivated successfully. 
Apr 21 10:22:46.409975 containerd[1975]: time="2026-04-21T10:22:46.409918254Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 21 10:22:47.238288 kubelet[3393]: E0421 10:22:47.238213 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:49.238551 kubelet[3393]: E0421 10:22:49.238369 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:50.728036 containerd[1975]: time="2026-04-21T10:22:50.727979274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:50.729883 containerd[1975]: time="2026-04-21T10:22:50.729811425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 21 10:22:50.732583 containerd[1975]: time="2026-04-21T10:22:50.732200655Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:50.736186 containerd[1975]: time="2026-04-21T10:22:50.736142768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:22:50.738038 containerd[1975]: time="2026-04-21T10:22:50.737982942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" 
with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.327524343s" Apr 21 10:22:50.738038 containerd[1975]: time="2026-04-21T10:22:50.738015931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 21 10:22:50.746052 containerd[1975]: time="2026-04-21T10:22:50.746004483Z" level=info msg="CreateContainer within sandbox \"28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 21 10:22:50.783625 containerd[1975]: time="2026-04-21T10:22:50.783199419Z" level=info msg="CreateContainer within sandbox \"28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"159fe0d584835c4699669b107fa410faccd46cee8c5cd2b13a9f503ea57c9a4d\"" Apr 21 10:22:50.784072 containerd[1975]: time="2026-04-21T10:22:50.784041234Z" level=info msg="StartContainer for \"159fe0d584835c4699669b107fa410faccd46cee8c5cd2b13a9f503ea57c9a4d\"" Apr 21 10:22:50.829110 systemd[1]: run-containerd-runc-k8s.io-159fe0d584835c4699669b107fa410faccd46cee8c5cd2b13a9f503ea57c9a4d-runc.9Z0nSH.mount: Deactivated successfully. Apr 21 10:22:50.837747 systemd[1]: Started cri-containerd-159fe0d584835c4699669b107fa410faccd46cee8c5cd2b13a9f503ea57c9a4d.scope - libcontainer container 159fe0d584835c4699669b107fa410faccd46cee8c5cd2b13a9f503ea57c9a4d. 
Apr 21 10:22:50.875355 containerd[1975]: time="2026-04-21T10:22:50.875298577Z" level=info msg="StartContainer for \"159fe0d584835c4699669b107fa410faccd46cee8c5cd2b13a9f503ea57c9a4d\" returns successfully" Apr 21 10:22:51.238861 kubelet[3393]: E0421 10:22:51.238811 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:51.943431 systemd[1]: cri-containerd-159fe0d584835c4699669b107fa410faccd46cee8c5cd2b13a9f503ea57c9a4d.scope: Deactivated successfully. Apr 21 10:22:51.995726 kubelet[3393]: I0421 10:22:51.994424 3393 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 21 10:22:52.005771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-159fe0d584835c4699669b107fa410faccd46cee8c5cd2b13a9f503ea57c9a4d-rootfs.mount: Deactivated successfully. 
Apr 21 10:22:52.018541 containerd[1975]: time="2026-04-21T10:22:52.018237406Z" level=info msg="shim disconnected" id=159fe0d584835c4699669b107fa410faccd46cee8c5cd2b13a9f503ea57c9a4d namespace=k8s.io Apr 21 10:22:52.019744 containerd[1975]: time="2026-04-21T10:22:52.018344040Z" level=warning msg="cleaning up after shim disconnected" id=159fe0d584835c4699669b107fa410faccd46cee8c5cd2b13a9f503ea57c9a4d namespace=k8s.io Apr 21 10:22:52.019744 containerd[1975]: time="2026-04-21T10:22:52.018940735Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:22:52.046724 containerd[1975]: time="2026-04-21T10:22:52.046663024Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:22:52Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:22:52.110244 systemd[1]: Created slice kubepods-burstable-pode0245abf_1cb1_48b9_b736_006cc52f0a7d.slice - libcontainer container kubepods-burstable-pode0245abf_1cb1_48b9_b736_006cc52f0a7d.slice. 
Apr 21 10:22:52.135839 kubelet[3393]: I0421 10:22:52.135665 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjwnl\" (UniqueName: \"kubernetes.io/projected/e0245abf-1cb1-48b9-b736-006cc52f0a7d-kube-api-access-mjwnl\") pod \"coredns-674b8bbfcf-kq922\" (UID: \"e0245abf-1cb1-48b9-b736-006cc52f0a7d\") " pod="kube-system/coredns-674b8bbfcf-kq922" Apr 21 10:22:52.135839 kubelet[3393]: I0421 10:22:52.135750 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0245abf-1cb1-48b9-b736-006cc52f0a7d-config-volume\") pod \"coredns-674b8bbfcf-kq922\" (UID: \"e0245abf-1cb1-48b9-b736-006cc52f0a7d\") " pod="kube-system/coredns-674b8bbfcf-kq922" Apr 21 10:22:52.141248 systemd[1]: Created slice kubepods-besteffort-pod126f393b_3d88_44db_b88f_944f8fffc842.slice - libcontainer container kubepods-besteffort-pod126f393b_3d88_44db_b88f_944f8fffc842.slice. Apr 21 10:22:52.175685 systemd[1]: Created slice kubepods-besteffort-pod18a66424_432c_43b2_9b85_3b805c5d2979.slice - libcontainer container kubepods-besteffort-pod18a66424_432c_43b2_9b85_3b805c5d2979.slice. Apr 21 10:22:52.215751 systemd[1]: Created slice kubepods-besteffort-podd0515f99_cc48_4126_aab8_41d534ccbd0f.slice - libcontainer container kubepods-besteffort-podd0515f99_cc48_4126_aab8_41d534ccbd0f.slice. 
Apr 21 10:22:52.253429 kubelet[3393]: I0421 10:22:52.253369 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/126f393b-3d88-44db-b88f-944f8fffc842-config\") pod \"goldmane-5b85766d88-hsqq5\" (UID: \"126f393b-3d88-44db-b88f-944f8fffc842\") " pod="calico-system/goldmane-5b85766d88-hsqq5" Apr 21 10:22:52.255292 kubelet[3393]: I0421 10:22:52.253608 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18a66424-432c-43b2-9b85-3b805c5d2979-whisker-backend-key-pair\") pod \"whisker-54c9c74cc-nh8pr\" (UID: \"18a66424-432c-43b2-9b85-3b805c5d2979\") " pod="calico-system/whisker-54c9c74cc-nh8pr" Apr 21 10:22:52.255292 kubelet[3393]: I0421 10:22:52.253677 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18a66424-432c-43b2-9b85-3b805c5d2979-whisker-ca-bundle\") pod \"whisker-54c9c74cc-nh8pr\" (UID: \"18a66424-432c-43b2-9b85-3b805c5d2979\") " pod="calico-system/whisker-54c9c74cc-nh8pr" Apr 21 10:22:52.255292 kubelet[3393]: I0421 10:22:52.253918 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/126f393b-3d88-44db-b88f-944f8fffc842-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-hsqq5\" (UID: \"126f393b-3d88-44db-b88f-944f8fffc842\") " pod="calico-system/goldmane-5b85766d88-hsqq5" Apr 21 10:22:52.255292 kubelet[3393]: I0421 10:22:52.253950 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/126f393b-3d88-44db-b88f-944f8fffc842-goldmane-key-pair\") pod \"goldmane-5b85766d88-hsqq5\" (UID: \"126f393b-3d88-44db-b88f-944f8fffc842\") " 
pod="calico-system/goldmane-5b85766d88-hsqq5" Apr 21 10:22:52.255292 kubelet[3393]: I0421 10:22:52.254082 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/18a66424-432c-43b2-9b85-3b805c5d2979-nginx-config\") pod \"whisker-54c9c74cc-nh8pr\" (UID: \"18a66424-432c-43b2-9b85-3b805c5d2979\") " pod="calico-system/whisker-54c9c74cc-nh8pr" Apr 21 10:22:52.258102 kubelet[3393]: I0421 10:22:52.254109 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d0515f99-cc48-4126-aab8-41d534ccbd0f-calico-apiserver-certs\") pod \"calico-apiserver-7b4964dbc6-vq6mv\" (UID: \"d0515f99-cc48-4126-aab8-41d534ccbd0f\") " pod="calico-system/calico-apiserver-7b4964dbc6-vq6mv" Apr 21 10:22:52.258102 kubelet[3393]: I0421 10:22:52.254346 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83fea0e3-1b93-4c76-acf8-8d0eb96c26b9-tigera-ca-bundle\") pod \"calico-kube-controllers-687db6948-llqks\" (UID: \"83fea0e3-1b93-4c76-acf8-8d0eb96c26b9\") " pod="calico-system/calico-kube-controllers-687db6948-llqks" Apr 21 10:22:52.258102 kubelet[3393]: I0421 10:22:52.254441 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgwqc\" (UniqueName: \"kubernetes.io/projected/18a66424-432c-43b2-9b85-3b805c5d2979-kube-api-access-xgwqc\") pod \"whisker-54c9c74cc-nh8pr\" (UID: \"18a66424-432c-43b2-9b85-3b805c5d2979\") " pod="calico-system/whisker-54c9c74cc-nh8pr" Apr 21 10:22:52.258102 kubelet[3393]: I0421 10:22:52.254491 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jzq2\" (UniqueName: 
\"kubernetes.io/projected/d0515f99-cc48-4126-aab8-41d534ccbd0f-kube-api-access-8jzq2\") pod \"calico-apiserver-7b4964dbc6-vq6mv\" (UID: \"d0515f99-cc48-4126-aab8-41d534ccbd0f\") " pod="calico-system/calico-apiserver-7b4964dbc6-vq6mv" Apr 21 10:22:52.258102 kubelet[3393]: I0421 10:22:52.254556 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4fb7221c-ea69-4ca4-82b5-711eb8fdfc35-config-volume\") pod \"coredns-674b8bbfcf-q9mxk\" (UID: \"4fb7221c-ea69-4ca4-82b5-711eb8fdfc35\") " pod="kube-system/coredns-674b8bbfcf-q9mxk" Apr 21 10:22:52.258333 kubelet[3393]: I0421 10:22:52.254583 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/36e30077-4d0e-4cfb-85c1-e2be8e459364-calico-apiserver-certs\") pod \"calico-apiserver-7b4964dbc6-crhn4\" (UID: \"36e30077-4d0e-4cfb-85c1-e2be8e459364\") " pod="calico-system/calico-apiserver-7b4964dbc6-crhn4" Apr 21 10:22:52.258333 kubelet[3393]: I0421 10:22:52.254619 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slr5s\" (UniqueName: \"kubernetes.io/projected/36e30077-4d0e-4cfb-85c1-e2be8e459364-kube-api-access-slr5s\") pod \"calico-apiserver-7b4964dbc6-crhn4\" (UID: \"36e30077-4d0e-4cfb-85c1-e2be8e459364\") " pod="calico-system/calico-apiserver-7b4964dbc6-crhn4" Apr 21 10:22:52.258333 kubelet[3393]: I0421 10:22:52.254645 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dblf\" (UniqueName: \"kubernetes.io/projected/83fea0e3-1b93-4c76-acf8-8d0eb96c26b9-kube-api-access-8dblf\") pod \"calico-kube-controllers-687db6948-llqks\" (UID: \"83fea0e3-1b93-4c76-acf8-8d0eb96c26b9\") " pod="calico-system/calico-kube-controllers-687db6948-llqks" Apr 21 10:22:52.258333 kubelet[3393]: I0421 
10:22:52.254715 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g676s\" (UniqueName: \"kubernetes.io/projected/4fb7221c-ea69-4ca4-82b5-711eb8fdfc35-kube-api-access-g676s\") pod \"coredns-674b8bbfcf-q9mxk\" (UID: \"4fb7221c-ea69-4ca4-82b5-711eb8fdfc35\") " pod="kube-system/coredns-674b8bbfcf-q9mxk" Apr 21 10:22:52.258333 kubelet[3393]: I0421 10:22:52.254741 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx7nc\" (UniqueName: \"kubernetes.io/projected/126f393b-3d88-44db-b88f-944f8fffc842-kube-api-access-zx7nc\") pod \"goldmane-5b85766d88-hsqq5\" (UID: \"126f393b-3d88-44db-b88f-944f8fffc842\") " pod="calico-system/goldmane-5b85766d88-hsqq5" Apr 21 10:22:52.262461 systemd[1]: Created slice kubepods-besteffort-pod36e30077_4d0e_4cfb_85c1_e2be8e459364.slice - libcontainer container kubepods-besteffort-pod36e30077_4d0e_4cfb_85c1_e2be8e459364.slice. Apr 21 10:22:52.295871 systemd[1]: Created slice kubepods-besteffort-pod83fea0e3_1b93_4c76_acf8_8d0eb96c26b9.slice - libcontainer container kubepods-besteffort-pod83fea0e3_1b93_4c76_acf8_8d0eb96c26b9.slice. Apr 21 10:22:52.304260 systemd[1]: Created slice kubepods-burstable-pod4fb7221c_ea69_4ca4_82b5_711eb8fdfc35.slice - libcontainer container kubepods-burstable-pod4fb7221c_ea69_4ca4_82b5_711eb8fdfc35.slice. 
Apr 21 10:22:52.439450 containerd[1975]: time="2026-04-21T10:22:52.439142568Z" level=info msg="CreateContainer within sandbox \"28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 21 10:22:52.444556 containerd[1975]: time="2026-04-21T10:22:52.444496468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kq922,Uid:e0245abf-1cb1-48b9-b736-006cc52f0a7d,Namespace:kube-system,Attempt:0,}" Apr 21 10:22:52.470782 containerd[1975]: time="2026-04-21T10:22:52.470654977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hsqq5,Uid:126f393b-3d88-44db-b88f-944f8fffc842,Namespace:calico-system,Attempt:0,}" Apr 21 10:22:52.493771 containerd[1975]: time="2026-04-21T10:22:52.492867285Z" level=info msg="CreateContainer within sandbox \"28a49230305eb94b3425f13b6734d9cd2b6337e739d584c91581a2fd2fe37ae0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d2686ca1f75b195969bd5d21b6dfdae469c9f5bd8646fd098bad8ccb0091bd11\"" Apr 21 10:22:52.494277 containerd[1975]: time="2026-04-21T10:22:52.494230148Z" level=info msg="StartContainer for \"d2686ca1f75b195969bd5d21b6dfdae469c9f5bd8646fd098bad8ccb0091bd11\"" Apr 21 10:22:52.519053 containerd[1975]: time="2026-04-21T10:22:52.518999684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c9c74cc-nh8pr,Uid:18a66424-432c-43b2-9b85-3b805c5d2979,Namespace:calico-system,Attempt:0,}" Apr 21 10:22:52.555511 systemd[1]: Started cri-containerd-d2686ca1f75b195969bd5d21b6dfdae469c9f5bd8646fd098bad8ccb0091bd11.scope - libcontainer container d2686ca1f75b195969bd5d21b6dfdae469c9f5bd8646fd098bad8ccb0091bd11. 
Apr 21 10:22:52.565474 containerd[1975]: time="2026-04-21T10:22:52.565012559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4964dbc6-vq6mv,Uid:d0515f99-cc48-4126-aab8-41d534ccbd0f,Namespace:calico-system,Attempt:0,}" Apr 21 10:22:52.640777 containerd[1975]: time="2026-04-21T10:22:52.640730296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q9mxk,Uid:4fb7221c-ea69-4ca4-82b5-711eb8fdfc35,Namespace:kube-system,Attempt:0,}" Apr 21 10:22:52.641424 containerd[1975]: time="2026-04-21T10:22:52.641382073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4964dbc6-crhn4,Uid:36e30077-4d0e-4cfb-85c1-e2be8e459364,Namespace:calico-system,Attempt:0,}" Apr 21 10:22:52.641674 containerd[1975]: time="2026-04-21T10:22:52.641644696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687db6948-llqks,Uid:83fea0e3-1b93-4c76-acf8-8d0eb96c26b9,Namespace:calico-system,Attempt:0,}" Apr 21 10:22:52.679760 containerd[1975]: time="2026-04-21T10:22:52.679711972Z" level=info msg="StartContainer for \"d2686ca1f75b195969bd5d21b6dfdae469c9f5bd8646fd098bad8ccb0091bd11\" returns successfully" Apr 21 10:22:53.151591 containerd[1975]: time="2026-04-21T10:22:53.151418828Z" level=error msg="Failed to destroy network for sandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.162806 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade-shm.mount: Deactivated successfully. 
Apr 21 10:22:53.171715 containerd[1975]: time="2026-04-21T10:22:53.171347189Z" level=error msg="Failed to destroy network for sandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.181293 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1-shm.mount: Deactivated successfully. Apr 21 10:22:53.187093 containerd[1975]: time="2026-04-21T10:22:53.187019291Z" level=error msg="encountered an error cleaning up failed sandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.187233 containerd[1975]: time="2026-04-21T10:22:53.187158418Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4964dbc6-crhn4,Uid:36e30077-4d0e-4cfb-85c1-e2be8e459364,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.188317 containerd[1975]: time="2026-04-21T10:22:53.187281488Z" level=error msg="encountered an error cleaning up failed sandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Apr 21 10:22:53.188317 containerd[1975]: time="2026-04-21T10:22:53.187335351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4964dbc6-vq6mv,Uid:d0515f99-cc48-4126-aab8-41d534ccbd0f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.211016 kubelet[3393]: E0421 10:22:53.210953 3393 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.211209 kubelet[3393]: E0421 10:22:53.211110 3393 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.213481 kubelet[3393]: E0421 10:22:53.211911 3393 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b4964dbc6-crhn4" Apr 21 10:22:53.214257 
kubelet[3393]: E0421 10:22:53.213962 3393 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b4964dbc6-vq6mv" Apr 21 10:22:53.216867 kubelet[3393]: E0421 10:22:53.216770 3393 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b4964dbc6-vq6mv" Apr 21 10:22:53.216968 kubelet[3393]: E0421 10:22:53.216880 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b4964dbc6-vq6mv_calico-system(d0515f99-cc48-4126-aab8-41d534ccbd0f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b4964dbc6-vq6mv_calico-system(d0515f99-cc48-4126-aab8-41d534ccbd0f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7b4964dbc6-vq6mv" podUID="d0515f99-cc48-4126-aab8-41d534ccbd0f" Apr 21 10:22:53.217937 kubelet[3393]: E0421 10:22:53.217853 3393 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7b4964dbc6-crhn4" Apr 21 10:22:53.218423 kubelet[3393]: E0421 10:22:53.217955 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b4964dbc6-crhn4_calico-system(36e30077-4d0e-4cfb-85c1-e2be8e459364)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b4964dbc6-crhn4_calico-system(36e30077-4d0e-4cfb-85c1-e2be8e459364)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7b4964dbc6-crhn4" podUID="36e30077-4d0e-4cfb-85c1-e2be8e459364" Apr 21 10:22:53.240020 containerd[1975]: time="2026-04-21T10:22:53.237477745Z" level=error msg="Failed to destroy network for sandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.242334 containerd[1975]: time="2026-04-21T10:22:53.242285897Z" level=error msg="Failed to destroy network for sandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.244354 containerd[1975]: 
time="2026-04-21T10:22:53.242835287Z" level=error msg="encountered an error cleaning up failed sandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.244354 containerd[1975]: time="2026-04-21T10:22:53.242917677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c9c74cc-nh8pr,Uid:18a66424-432c-43b2-9b85-3b805c5d2979,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.244555 kubelet[3393]: E0421 10:22:53.243218 3393 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.244555 kubelet[3393]: E0421 10:22:53.243283 3393 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54c9c74cc-nh8pr" Apr 21 10:22:53.244555 kubelet[3393]: E0421 10:22:53.243315 3393 kuberuntime_manager.go:1252] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54c9c74cc-nh8pr" Apr 21 10:22:53.244720 kubelet[3393]: E0421 10:22:53.243385 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54c9c74cc-nh8pr_calico-system(18a66424-432c-43b2-9b85-3b805c5d2979)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-54c9c74cc-nh8pr_calico-system(18a66424-432c-43b2-9b85-3b805c5d2979)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54c9c74cc-nh8pr" podUID="18a66424-432c-43b2-9b85-3b805c5d2979" Apr 21 10:22:53.246146 containerd[1975]: time="2026-04-21T10:22:53.245844346Z" level=error msg="encountered an error cleaning up failed sandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.246146 containerd[1975]: time="2026-04-21T10:22:53.245914985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hsqq5,Uid:126f393b-3d88-44db-b88f-944f8fffc842,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.248006 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a-shm.mount: Deactivated successfully. Apr 21 10:22:53.254817 kubelet[3393]: E0421 10:22:53.254216 3393 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.254817 kubelet[3393]: E0421 10:22:53.254282 3393 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-hsqq5" Apr 21 10:22:53.254817 kubelet[3393]: E0421 10:22:53.254312 3393 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-hsqq5" Apr 21 10:22:53.255342 kubelet[3393]: E0421 10:22:53.254385 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"goldmane-5b85766d88-hsqq5_calico-system(126f393b-3d88-44db-b88f-944f8fffc842)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-hsqq5_calico-system(126f393b-3d88-44db-b88f-944f8fffc842)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-hsqq5" podUID="126f393b-3d88-44db-b88f-944f8fffc842" Apr 21 10:22:53.256719 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5-shm.mount: Deactivated successfully. Apr 21 10:22:53.257190 containerd[1975]: time="2026-04-21T10:22:53.256724666Z" level=error msg="Failed to destroy network for sandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.257190 containerd[1975]: time="2026-04-21T10:22:53.256895086Z" level=error msg="Failed to destroy network for sandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.257996 containerd[1975]: time="2026-04-21T10:22:53.257653445Z" level=error msg="encountered an error cleaning up failed sandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.257996 containerd[1975]: time="2026-04-21T10:22:53.257875085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687db6948-llqks,Uid:83fea0e3-1b93-4c76-acf8-8d0eb96c26b9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.259200 kubelet[3393]: E0421 10:22:53.258942 3393 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.259200 kubelet[3393]: E0421 10:22:53.259008 3393 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-687db6948-llqks" Apr 21 10:22:53.259969 kubelet[3393]: E0421 10:22:53.259491 3393 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-687db6948-llqks" Apr 21 10:22:53.259969 kubelet[3393]: E0421 10:22:53.259655 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-687db6948-llqks_calico-system(83fea0e3-1b93-4c76-acf8-8d0eb96c26b9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-687db6948-llqks_calico-system(83fea0e3-1b93-4c76-acf8-8d0eb96c26b9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-687db6948-llqks" podUID="83fea0e3-1b93-4c76-acf8-8d0eb96c26b9" Apr 21 10:22:53.260614 containerd[1975]: time="2026-04-21T10:22:53.260455448Z" level=error msg="encountered an error cleaning up failed sandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.260927 containerd[1975]: time="2026-04-21T10:22:53.260739533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kq922,Uid:e0245abf-1cb1-48b9-b736-006cc52f0a7d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.261821 kubelet[3393]: E0421 10:22:53.261759 3393 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.261821 kubelet[3393]: E0421 10:22:53.261807 3393 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kq922" Apr 21 10:22:53.262005 kubelet[3393]: E0421 10:22:53.261836 3393 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-kq922" Apr 21 10:22:53.262005 kubelet[3393]: E0421 10:22:53.261889 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-kq922_kube-system(e0245abf-1cb1-48b9-b736-006cc52f0a7d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-kq922_kube-system(e0245abf-1cb1-48b9-b736-006cc52f0a7d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-kq922" podUID="e0245abf-1cb1-48b9-b736-006cc52f0a7d" Apr 21 10:22:53.270194 systemd[1]: Created slice kubepods-besteffort-pod0da02c82_49c9_40d9_881a_313b594008da.slice - libcontainer container kubepods-besteffort-pod0da02c82_49c9_40d9_881a_313b594008da.slice. Apr 21 10:22:53.273722 containerd[1975]: time="2026-04-21T10:22:53.273113041Z" level=error msg="Failed to destroy network for sandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.273722 containerd[1975]: time="2026-04-21T10:22:53.273471447Z" level=error msg="encountered an error cleaning up failed sandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.273722 containerd[1975]: time="2026-04-21T10:22:53.273593543Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q9mxk,Uid:4fb7221c-ea69-4ca4-82b5-711eb8fdfc35,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.273934 kubelet[3393]: E0421 10:22:53.273870 3393 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.274004 kubelet[3393]: E0421 10:22:53.273935 3393 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q9mxk" Apr 21 10:22:53.274004 kubelet[3393]: E0421 10:22:53.273964 3393 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-q9mxk" Apr 21 10:22:53.274117 kubelet[3393]: E0421 10:22:53.274019 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-q9mxk_kube-system(4fb7221c-ea69-4ca4-82b5-711eb8fdfc35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-q9mxk_kube-system(4fb7221c-ea69-4ca4-82b5-711eb8fdfc35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-q9mxk" podUID="4fb7221c-ea69-4ca4-82b5-711eb8fdfc35" Apr 21 10:22:53.275514 containerd[1975]: 
time="2026-04-21T10:22:53.275409175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kwvj,Uid:0da02c82-49c9-40d9-881a-313b594008da,Namespace:calico-system,Attempt:0,}" Apr 21 10:22:53.366330 containerd[1975]: time="2026-04-21T10:22:53.366280224Z" level=error msg="Failed to destroy network for sandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.366694 containerd[1975]: time="2026-04-21T10:22:53.366660444Z" level=error msg="encountered an error cleaning up failed sandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.366799 containerd[1975]: time="2026-04-21T10:22:53.366727655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kwvj,Uid:0da02c82-49c9-40d9-881a-313b594008da,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 21 10:22:53.367003 kubelet[3393]: E0421 10:22:53.366967 3393 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Apr 21 10:22:53.367101 kubelet[3393]: E0421 10:22:53.367028 3393 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5kwvj" Apr 21 10:22:53.367101 kubelet[3393]: E0421 10:22:53.367055 3393 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5kwvj" Apr 21 10:22:53.367252 kubelet[3393]: E0421 10:22:53.367125 3393 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5kwvj_calico-system(0da02c82-49c9-40d9-881a-313b594008da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5kwvj_calico-system(0da02c82-49c9-40d9-881a-313b594008da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5kwvj" podUID="0da02c82-49c9-40d9-881a-313b594008da" Apr 21 10:22:53.427452 kubelet[3393]: I0421 10:22:53.426788 3393 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Apr 21 10:22:53.487782 kubelet[3393]: I0421 10:22:53.487701 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fnxlr" podStartSLOduration=4.299168522 podStartE2EDuration="25.487679246s" podCreationTimestamp="2026-04-21 10:22:28 +0000 UTC" firstStartedPulling="2026-04-21 10:22:29.550814092 +0000 UTC m=+17.527769185" lastFinishedPulling="2026-04-21 10:22:50.739324838 +0000 UTC m=+38.716279909" observedRunningTime="2026-04-21 10:22:53.486942675 +0000 UTC m=+41.463897780" watchObservedRunningTime="2026-04-21 10:22:53.487679246 +0000 UTC m=+41.464634342" Apr 21 10:22:53.539275 containerd[1975]: time="2026-04-21T10:22:53.538288473Z" level=info msg="StopPodSandbox for \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\"" Apr 21 10:22:53.544856 containerd[1975]: time="2026-04-21T10:22:53.544300746Z" level=info msg="Ensure that sandbox 95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a in task-service has been cleanup successfully" Apr 21 10:22:53.556964 kubelet[3393]: I0421 10:22:53.556917 3393 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:22:53.560874 containerd[1975]: time="2026-04-21T10:22:53.560739343Z" level=info msg="StopPodSandbox for \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\"" Apr 21 10:22:53.562810 containerd[1975]: time="2026-04-21T10:22:53.562770164Z" level=info msg="Ensure that sandbox b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5 in task-service has been cleanup successfully" Apr 21 10:22:53.606944 kubelet[3393]: I0421 10:22:53.604917 3393 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:22:53.607379 containerd[1975]: 
time="2026-04-21T10:22:53.607342732Z" level=info msg="StopPodSandbox for \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\"" Apr 21 10:22:53.607609 containerd[1975]: time="2026-04-21T10:22:53.607584972Z" level=info msg="Ensure that sandbox 1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c in task-service has been cleanup successfully" Apr 21 10:22:53.621920 kubelet[3393]: I0421 10:22:53.620743 3393 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Apr 21 10:22:53.622628 containerd[1975]: time="2026-04-21T10:22:53.621800616Z" level=info msg="StopPodSandbox for \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\"" Apr 21 10:22:53.624175 containerd[1975]: time="2026-04-21T10:22:53.624136085Z" level=info msg="Ensure that sandbox 40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe in task-service has been cleanup successfully" Apr 21 10:22:53.647006 kubelet[3393]: I0421 10:22:53.646970 3393 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Apr 21 10:22:53.649869 containerd[1975]: time="2026-04-21T10:22:53.649324849Z" level=info msg="StopPodSandbox for \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\"" Apr 21 10:22:53.649869 containerd[1975]: time="2026-04-21T10:22:53.649559043Z" level=info msg="Ensure that sandbox 01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed in task-service has been cleanup successfully" Apr 21 10:22:53.667511 kubelet[3393]: I0421 10:22:53.667472 3393 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Apr 21 10:22:53.671919 containerd[1975]: time="2026-04-21T10:22:53.671876420Z" level=info msg="StopPodSandbox for 
\"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\"" Apr 21 10:22:53.678372 containerd[1975]: time="2026-04-21T10:22:53.677873552Z" level=info msg="Ensure that sandbox d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1 in task-service has been cleanup successfully" Apr 21 10:22:53.685026 kubelet[3393]: I0421 10:22:53.683315 3393 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Apr 21 10:22:53.689550 containerd[1975]: time="2026-04-21T10:22:53.688312165Z" level=info msg="StopPodSandbox for \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\"" Apr 21 10:22:53.693232 containerd[1975]: time="2026-04-21T10:22:53.690945058Z" level=info msg="Ensure that sandbox bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be in task-service has been cleanup successfully" Apr 21 10:22:53.708148 kubelet[3393]: I0421 10:22:53.708041 3393 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Apr 21 10:22:53.711555 containerd[1975]: time="2026-04-21T10:22:53.710044545Z" level=info msg="StopPodSandbox for \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\"" Apr 21 10:22:53.711555 containerd[1975]: time="2026-04-21T10:22:53.710261169Z" level=info msg="Ensure that sandbox 6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade in task-service has been cleanup successfully" Apr 21 10:22:54.015453 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be-shm.mount: Deactivated successfully. Apr 21 10:22:54.015609 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed-shm.mount: Deactivated successfully. 
Apr 21 10:22:54.015702 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe-shm.mount: Deactivated successfully. Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:53.961 [INFO][4497] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:53.962 [INFO][4497] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" iface="eth0" netns="/var/run/netns/cni-2f18f5e6-52a2-7159-4a6f-cf343c07ae9a" Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:53.964 [INFO][4497] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" iface="eth0" netns="/var/run/netns/cni-2f18f5e6-52a2-7159-4a6f-cf343c07ae9a" Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:53.978 [INFO][4497] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" iface="eth0" netns="/var/run/netns/cni-2f18f5e6-52a2-7159-4a6f-cf343c07ae9a" Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:53.978 [INFO][4497] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:53.978 [INFO][4497] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:54.450 [INFO][4612] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" HandleID="k8s-pod-network.b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:54.450 [INFO][4612] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:54.450 [INFO][4612] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:54.475 [WARNING][4612] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" HandleID="k8s-pod-network.b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:54.475 [INFO][4612] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" HandleID="k8s-pod-network.b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:54.480 [INFO][4612] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:54.490661 containerd[1975]: 2026-04-21 10:22:54.487 [INFO][4497] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:22:54.494834 containerd[1975]: time="2026-04-21T10:22:54.494602822Z" level=info msg="TearDown network for sandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\" successfully" Apr 21 10:22:54.494834 containerd[1975]: time="2026-04-21T10:22:54.494641973Z" level=info msg="StopPodSandbox for \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\" returns successfully" Apr 21 10:22:54.497181 containerd[1975]: time="2026-04-21T10:22:54.496988536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c9c74cc-nh8pr,Uid:18a66424-432c-43b2-9b85-3b805c5d2979,Namespace:calico-system,Attempt:1,}" Apr 21 10:22:54.500194 systemd[1]: run-netns-cni\x2d2f18f5e6\x2d52a2\x2d7159\x2d4a6f\x2dcf343c07ae9a.mount: Deactivated successfully. 
Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.188 [INFO][4584] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.188 [INFO][4584] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" iface="eth0" netns="/var/run/netns/cni-e0758095-46fa-f86d-0197-b1d45380b6a0" Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.189 [INFO][4584] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" iface="eth0" netns="/var/run/netns/cni-e0758095-46fa-f86d-0197-b1d45380b6a0" Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.189 [INFO][4584] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" iface="eth0" netns="/var/run/netns/cni-e0758095-46fa-f86d-0197-b1d45380b6a0" Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.189 [INFO][4584] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.189 [INFO][4584] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.450 [INFO][4660] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" HandleID="k8s-pod-network.bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.451 [INFO][4660] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.480 [INFO][4660] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.491 [WARNING][4660] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" HandleID="k8s-pod-network.bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.491 [INFO][4660] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" HandleID="k8s-pod-network.bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.498 [INFO][4660] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:54.509312 containerd[1975]: 2026-04-21 10:22:54.502 [INFO][4584] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Apr 21 10:22:54.517540 containerd[1975]: time="2026-04-21T10:22:54.516598543Z" level=info msg="TearDown network for sandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\" successfully" Apr 21 10:22:54.517540 containerd[1975]: time="2026-04-21T10:22:54.516643938Z" level=info msg="StopPodSandbox for \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\" returns successfully" Apr 21 10:22:54.519441 containerd[1975]: time="2026-04-21T10:22:54.519089176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q9mxk,Uid:4fb7221c-ea69-4ca4-82b5-711eb8fdfc35,Namespace:kube-system,Attempt:1,}" Apr 21 10:22:54.519332 systemd[1]: run-netns-cni\x2de0758095\x2d46fa\x2df86d\x2d0197\x2db1d45380b6a0.mount: Deactivated successfully. Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.170 [INFO][4580] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.174 [INFO][4580] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" iface="eth0" netns="/var/run/netns/cni-ca0539cc-9235-8e13-0760-7164ab558e51" Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.175 [INFO][4580] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" iface="eth0" netns="/var/run/netns/cni-ca0539cc-9235-8e13-0760-7164ab558e51" Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.176 [INFO][4580] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" iface="eth0" netns="/var/run/netns/cni-ca0539cc-9235-8e13-0760-7164ab558e51" Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.176 [INFO][4580] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.176 [INFO][4580] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.450 [INFO][4653] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" HandleID="k8s-pod-network.6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.462 [INFO][4653] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.498 [INFO][4653] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.510 [WARNING][4653] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" HandleID="k8s-pod-network.6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.510 [INFO][4653] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" HandleID="k8s-pod-network.6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.523 [INFO][4653] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:54.529984 containerd[1975]: 2026-04-21 10:22:54.527 [INFO][4580] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Apr 21 10:22:54.537971 containerd[1975]: time="2026-04-21T10:22:54.530597096Z" level=info msg="TearDown network for sandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\" successfully" Apr 21 10:22:54.537971 containerd[1975]: time="2026-04-21T10:22:54.532809247Z" level=info msg="StopPodSandbox for \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\" returns successfully" Apr 21 10:22:54.541806 systemd[1]: run-netns-cni\x2dca0539cc\x2d9235\x2d8e13\x2d0760\x2d7164ab558e51.mount: Deactivated successfully. 
Apr 21 10:22:54.546984 containerd[1975]: time="2026-04-21T10:22:54.546946851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4964dbc6-vq6mv,Uid:d0515f99-cc48-4126-aab8-41d534ccbd0f,Namespace:calico-system,Attempt:1,}" Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.009 [INFO][4493] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.010 [INFO][4493] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" iface="eth0" netns="/var/run/netns/cni-9ff5a9de-0b0f-35f1-a050-753e076b2004" Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.012 [INFO][4493] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" iface="eth0" netns="/var/run/netns/cni-9ff5a9de-0b0f-35f1-a050-753e076b2004" Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.025 [INFO][4493] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" iface="eth0" netns="/var/run/netns/cni-9ff5a9de-0b0f-35f1-a050-753e076b2004" Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.025 [INFO][4493] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.025 [INFO][4493] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.463 [INFO][4630] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" HandleID="k8s-pod-network.95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.463 [INFO][4630] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.524 [INFO][4630] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.537 [WARNING][4630] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" HandleID="k8s-pod-network.95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.537 [INFO][4630] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" HandleID="k8s-pod-network.95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.545 [INFO][4630] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:54.572491 containerd[1975]: 2026-04-21 10:22:54.554 [INFO][4493] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Apr 21 10:22:54.574175 containerd[1975]: time="2026-04-21T10:22:54.574104379Z" level=info msg="TearDown network for sandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\" successfully" Apr 21 10:22:54.574436 containerd[1975]: time="2026-04-21T10:22:54.574270470Z" level=info msg="StopPodSandbox for \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\" returns successfully" Apr 21 10:22:54.577685 containerd[1975]: time="2026-04-21T10:22:54.577649580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hsqq5,Uid:126f393b-3d88-44db-b88f-944f8fffc842,Namespace:calico-system,Attempt:1,}" Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.176 [INFO][4577] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.177 [INFO][4577] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" iface="eth0" netns="/var/run/netns/cni-c6a030c4-8d16-128a-5395-99c89c693744" Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.177 [INFO][4577] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" iface="eth0" netns="/var/run/netns/cni-c6a030c4-8d16-128a-5395-99c89c693744" Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.178 [INFO][4577] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" iface="eth0" netns="/var/run/netns/cni-c6a030c4-8d16-128a-5395-99c89c693744" Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.178 [INFO][4577] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.178 [INFO][4577] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.459 [INFO][4654] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" HandleID="k8s-pod-network.d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.464 [INFO][4654] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.540 [INFO][4654] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.565 [WARNING][4654] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" HandleID="k8s-pod-network.d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.565 [INFO][4654] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" HandleID="k8s-pod-network.d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.569 [INFO][4654] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:54.595247 containerd[1975]: 2026-04-21 10:22:54.585 [INFO][4577] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Apr 21 10:22:54.598129 containerd[1975]: time="2026-04-21T10:22:54.597044994Z" level=info msg="TearDown network for sandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\" successfully" Apr 21 10:22:54.598129 containerd[1975]: time="2026-04-21T10:22:54.597087538Z" level=info msg="StopPodSandbox for \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\" returns successfully" Apr 21 10:22:54.598890 containerd[1975]: time="2026-04-21T10:22:54.598799713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4964dbc6-crhn4,Uid:36e30077-4d0e-4cfb-85c1-e2be8e459364,Namespace:calico-system,Attempt:1,}" Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.043 [INFO][4538] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.045 [INFO][4538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" iface="eth0" netns="/var/run/netns/cni-14fb9350-7a83-6c46-ce11-4f9ea88f1615" Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.046 [INFO][4538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" iface="eth0" netns="/var/run/netns/cni-14fb9350-7a83-6c46-ce11-4f9ea88f1615" Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.046 [INFO][4538] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" iface="eth0" netns="/var/run/netns/cni-14fb9350-7a83-6c46-ce11-4f9ea88f1615" Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.046 [INFO][4538] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.048 [INFO][4538] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.459 [INFO][4636] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" HandleID="k8s-pod-network.40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.465 [INFO][4636] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.591 [INFO][4636] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.605 [WARNING][4636] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" HandleID="k8s-pod-network.40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.605 [INFO][4636] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" HandleID="k8s-pod-network.40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.608 [INFO][4636] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:54.626585 containerd[1975]: 2026-04-21 10:22:54.610 [INFO][4538] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Apr 21 10:22:54.629652 containerd[1975]: time="2026-04-21T10:22:54.629125022Z" level=info msg="TearDown network for sandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\" successfully" Apr 21 10:22:54.629652 containerd[1975]: time="2026-04-21T10:22:54.629161135Z" level=info msg="StopPodSandbox for \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\" returns successfully" Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.046 [INFO][4526] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.047 [INFO][4526] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" iface="eth0" netns="/var/run/netns/cni-c741d5b4-2f42-8846-7fb4-0c97fb5141b8" Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.052 [INFO][4526] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" iface="eth0" netns="/var/run/netns/cni-c741d5b4-2f42-8846-7fb4-0c97fb5141b8" Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.052 [INFO][4526] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" iface="eth0" netns="/var/run/netns/cni-c741d5b4-2f42-8846-7fb4-0c97fb5141b8" Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.052 [INFO][4526] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.052 [INFO][4526] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.453 [INFO][4638] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" HandleID="k8s-pod-network.1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.464 [INFO][4638] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.568 [INFO][4638] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.587 [WARNING][4638] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" HandleID="k8s-pod-network.1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.588 [INFO][4638] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" HandleID="k8s-pod-network.1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.590 [INFO][4638] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:54.630000 containerd[1975]: 2026-04-21 10:22:54.607 [INFO][4526] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:22:54.631318 containerd[1975]: time="2026-04-21T10:22:54.630634842Z" level=info msg="TearDown network for sandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\" successfully" Apr 21 10:22:54.631318 containerd[1975]: time="2026-04-21T10:22:54.630849156Z" level=info msg="StopPodSandbox for \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\" returns successfully" Apr 21 10:22:54.632132 containerd[1975]: time="2026-04-21T10:22:54.632104581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kq922,Uid:e0245abf-1cb1-48b9-b736-006cc52f0a7d,Namespace:kube-system,Attempt:1,}" Apr 21 10:22:54.636294 containerd[1975]: time="2026-04-21T10:22:54.636243159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kwvj,Uid:0da02c82-49c9-40d9-881a-313b594008da,Namespace:calico-system,Attempt:1,}" Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:53.972 [INFO][4562] cni-plugin/k8s.go 652: Cleaning up netns 
ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:53.973 [INFO][4562] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" iface="eth0" netns="/var/run/netns/cni-20ff4b7f-8e9a-bc3a-ee24-3657f34ecd35" Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:53.978 [INFO][4562] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" iface="eth0" netns="/var/run/netns/cni-20ff4b7f-8e9a-bc3a-ee24-3657f34ecd35" Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:53.979 [INFO][4562] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" iface="eth0" netns="/var/run/netns/cni-20ff4b7f-8e9a-bc3a-ee24-3657f34ecd35" Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:53.979 [INFO][4562] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:53.979 [INFO][4562] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:54.471 [INFO][4616] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" HandleID="k8s-pod-network.01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:54.473 [INFO][4616] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:54.608 [INFO][4616] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:54.637 [WARNING][4616] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" HandleID="k8s-pod-network.01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:54.637 [INFO][4616] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" HandleID="k8s-pod-network.01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:54.649 [INFO][4616] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:54.672954 containerd[1975]: 2026-04-21 10:22:54.663 [INFO][4562] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Apr 21 10:22:54.677000 containerd[1975]: time="2026-04-21T10:22:54.673004074Z" level=info msg="TearDown network for sandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\" successfully" Apr 21 10:22:54.677000 containerd[1975]: time="2026-04-21T10:22:54.673045511Z" level=info msg="StopPodSandbox for \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\" returns successfully" Apr 21 10:22:54.677000 containerd[1975]: time="2026-04-21T10:22:54.674826406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687db6948-llqks,Uid:83fea0e3-1b93-4c76-acf8-8d0eb96c26b9,Namespace:calico-system,Attempt:1,}" Apr 21 10:22:55.046229 systemd[1]: run-netns-cni\x2dc741d5b4\x2d2f42\x2d8846\x2d7fb4\x2d0c97fb5141b8.mount: Deactivated successfully. Apr 21 10:22:55.049720 systemd[1]: run-netns-cni\x2dc6a030c4\x2d8d16\x2d128a\x2d5395\x2d99c89c693744.mount: Deactivated successfully. Apr 21 10:22:55.049843 systemd[1]: run-netns-cni\x2d20ff4b7f\x2d8e9a\x2dbc3a\x2dee24\x2d3657f34ecd35.mount: Deactivated successfully. Apr 21 10:22:55.049926 systemd[1]: run-netns-cni\x2d9ff5a9de\x2d0b0f\x2d35f1\x2da050\x2d753e076b2004.mount: Deactivated successfully. Apr 21 10:22:55.050003 systemd[1]: run-netns-cni\x2d14fb9350\x2d7a83\x2d6c46\x2dce11\x2d4f9ea88f1615.mount: Deactivated successfully. Apr 21 10:22:55.323009 systemd-networkd[1897]: cali076c28a381c: Link UP Apr 21 10:22:55.324693 systemd-networkd[1897]: cali076c28a381c: Gained carrier Apr 21 10:22:55.350695 (udev-worker)[4865]: Network interface NamePolicy= disabled on kernel command line. 
Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:54.749 [ERROR][4709] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:54.796 [INFO][4709] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0 calico-apiserver-7b4964dbc6- calico-system d0515f99-cc48-4126-aab8-41d534ccbd0f 921 0 2026-04-21 10:22:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b4964dbc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-37 calico-apiserver-7b4964dbc6-vq6mv eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali076c28a381c [] [] }} ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-vq6mv" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:54.796 [INFO][4709] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-vq6mv" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:54.962 [INFO][4790] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" HandleID="k8s-pod-network.5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" 
Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.020 [INFO][4790] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" HandleID="k8s-pod-network.5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000622350), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-37", "pod":"calico-apiserver-7b4964dbc6-vq6mv", "timestamp":"2026-04-21 10:22:54.962436096 +0000 UTC"}, Hostname:"ip-172-31-24-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000244000)} Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.020 [INFO][4790] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.020 [INFO][4790] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.020 [INFO][4790] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-37' Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.028 [INFO][4790] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" host="ip-172-31-24-37" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.061 [INFO][4790] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-37" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.093 [INFO][4790] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.103 [INFO][4790] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.113 [INFO][4790] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.113 [INFO][4790] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" host="ip-172-31-24-37" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.122 [INFO][4790] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.161 [INFO][4790] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" host="ip-172-31-24-37" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.212 [INFO][4790] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.1/26] block=192.168.51.0/26 
handle="k8s-pod-network.5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" host="ip-172-31-24-37" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.212 [INFO][4790] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.1/26] handle="k8s-pod-network.5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" host="ip-172-31-24-37" Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.217 [INFO][4790] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:55.432663 containerd[1975]: 2026-04-21 10:22:55.217 [INFO][4790] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.1/26] IPv6=[] ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" HandleID="k8s-pod-network.5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" Apr 21 10:22:55.433774 containerd[1975]: 2026-04-21 10:22:55.255 [INFO][4709] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-vq6mv" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0", GenerateName:"calico-apiserver-7b4964dbc6-", Namespace:"calico-system", SelfLink:"", UID:"d0515f99-cc48-4126-aab8-41d534ccbd0f", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4964dbc6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"", Pod:"calico-apiserver-7b4964dbc6-vq6mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali076c28a381c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:55.433774 containerd[1975]: 2026-04-21 10:22:55.269 [INFO][4709] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.1/32] ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-vq6mv" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" Apr 21 10:22:55.433774 containerd[1975]: 2026-04-21 10:22:55.269 [INFO][4709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali076c28a381c ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-vq6mv" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" Apr 21 10:22:55.433774 containerd[1975]: 2026-04-21 10:22:55.328 [INFO][4709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-vq6mv" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" Apr 21 10:22:55.433774 containerd[1975]: 2026-04-21 10:22:55.333 [INFO][4709] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-vq6mv" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0", GenerateName:"calico-apiserver-7b4964dbc6-", Namespace:"calico-system", SelfLink:"", UID:"d0515f99-cc48-4126-aab8-41d534ccbd0f", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4964dbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe", Pod:"calico-apiserver-7b4964dbc6-vq6mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali076c28a381c", MAC:"ce:91:0d:0b:41:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:55.433774 containerd[1975]: 2026-04-21 10:22:55.375 [INFO][4709] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-vq6mv" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0" Apr 21 10:22:55.508374 systemd-networkd[1897]: cali2e9ca1d4f02: Link UP Apr 21 10:22:55.511361 systemd-networkd[1897]: cali2e9ca1d4f02: Gained carrier Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:54.688 [ERROR][4686] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:54.734 [INFO][4686] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0 whisker-54c9c74cc- calico-system 18a66424-432c-43b2-9b85-3b805c5d2979 916 0 2026-04-21 10:22:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54c9c74cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-37 whisker-54c9c74cc-nh8pr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali2e9ca1d4f02 [] [] }} ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Namespace="calico-system" Pod="whisker-54c9c74cc-nh8pr" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:54.734 [INFO][4686] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Namespace="calico-system" Pod="whisker-54c9c74cc-nh8pr" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.069 [INFO][4760] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.146 [INFO][4760] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001226a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-37", "pod":"whisker-54c9c74cc-nh8pr", "timestamp":"2026-04-21 10:22:55.069147852 +0000 UTC"}, Hostname:"ip-172-31-24-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000220420)} Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.146 [INFO][4760] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.213 [INFO][4760] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.213 [INFO][4760] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-37' Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.241 [INFO][4760] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" host="ip-172-31-24-37" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.268 [INFO][4760] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-37" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.299 [INFO][4760] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.328 [INFO][4760] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.353 [INFO][4760] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.355 [INFO][4760] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" host="ip-172-31-24-37" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.362 [INFO][4760] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.405 [INFO][4760] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" host="ip-172-31-24-37" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.468 [INFO][4760] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.2/26] block=192.168.51.0/26 
handle="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" host="ip-172-31-24-37" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.468 [INFO][4760] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.2/26] handle="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" host="ip-172-31-24-37" Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.469 [INFO][4760] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:55.610900 containerd[1975]: 2026-04-21 10:22:55.473 [INFO][4760] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.2/26] IPv6=[] ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:22:55.613045 containerd[1975]: 2026-04-21 10:22:55.504 [INFO][4686] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Namespace="calico-system" Pod="whisker-54c9c74cc-nh8pr" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0", GenerateName:"whisker-54c9c74cc-", Namespace:"calico-system", SelfLink:"", UID:"18a66424-432c-43b2-9b85-3b805c5d2979", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54c9c74cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"", Pod:"whisker-54c9c74cc-nh8pr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e9ca1d4f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:55.613045 containerd[1975]: 2026-04-21 10:22:55.504 [INFO][4686] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.2/32] ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Namespace="calico-system" Pod="whisker-54c9c74cc-nh8pr" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:22:55.613045 containerd[1975]: 2026-04-21 10:22:55.504 [INFO][4686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e9ca1d4f02 ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Namespace="calico-system" Pod="whisker-54c9c74cc-nh8pr" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:22:55.613045 containerd[1975]: 2026-04-21 10:22:55.509 [INFO][4686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Namespace="calico-system" Pod="whisker-54c9c74cc-nh8pr" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:22:55.613045 containerd[1975]: 2026-04-21 10:22:55.509 [INFO][4686] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Namespace="calico-system" 
Pod="whisker-54c9c74cc-nh8pr" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0", GenerateName:"whisker-54c9c74cc-", Namespace:"calico-system", SelfLink:"", UID:"18a66424-432c-43b2-9b85-3b805c5d2979", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54c9c74cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f", Pod:"whisker-54c9c74cc-nh8pr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e9ca1d4f02", MAC:"16:c5:bf:dd:b0:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:55.613045 containerd[1975]: 2026-04-21 10:22:55.594 [INFO][4686] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Namespace="calico-system" Pod="whisker-54c9c74cc-nh8pr" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:22:55.647546 containerd[1975]: 
time="2026-04-21T10:22:55.645818249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:55.647546 containerd[1975]: time="2026-04-21T10:22:55.645907272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:55.647546 containerd[1975]: time="2026-04-21T10:22:55.645927910Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:55.647546 containerd[1975]: time="2026-04-21T10:22:55.646050222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:55.723810 containerd[1975]: time="2026-04-21T10:22:55.719862854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:55.723810 containerd[1975]: time="2026-04-21T10:22:55.719954684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:55.723810 containerd[1975]: time="2026-04-21T10:22:55.719980225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:55.723810 containerd[1975]: time="2026-04-21T10:22:55.720106047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:55.743343 systemd-networkd[1897]: cali8a32b2160e7: Link UP Apr 21 10:22:55.746777 systemd-networkd[1897]: cali8a32b2160e7: Gained carrier Apr 21 10:22:55.816750 systemd[1]: Started cri-containerd-5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe.scope - libcontainer container 5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe. 
Apr 21 10:22:55.827684 systemd[1]: Started cri-containerd-c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f.scope - libcontainer container c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f. Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:54.695 [ERROR][4700] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:54.732 [INFO][4700] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0 coredns-674b8bbfcf- kube-system 4fb7221c-ea69-4ca4-82b5-711eb8fdfc35 923 0 2026-04-21 10:22:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-37 coredns-674b8bbfcf-q9mxk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8a32b2160e7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-q9mxk" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:54.732 [INFO][4700] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-q9mxk" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.212 [INFO][4738] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" 
HandleID="k8s-pod-network.3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.262 [INFO][4738] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" HandleID="k8s-pod-network.3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000199110), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-37", "pod":"coredns-674b8bbfcf-q9mxk", "timestamp":"2026-04-21 10:22:55.212346992 +0000 UTC"}, Hostname:"ip-172-31-24-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003fe6e0)} Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.263 [INFO][4738] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.479 [INFO][4738] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.479 [INFO][4738] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-37' Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.521 [INFO][4738] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" host="ip-172-31-24-37" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.610 [INFO][4738] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-37" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.659 [INFO][4738] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.665 [INFO][4738] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.667 [INFO][4738] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.668 [INFO][4738] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" host="ip-172-31-24-37" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.670 [INFO][4738] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.679 [INFO][4738] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" host="ip-172-31-24-37" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.691 [INFO][4738] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.3/26] block=192.168.51.0/26 
handle="k8s-pod-network.3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" host="ip-172-31-24-37" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.692 [INFO][4738] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.3/26] handle="k8s-pod-network.3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" host="ip-172-31-24-37" Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.692 [INFO][4738] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:55.839583 containerd[1975]: 2026-04-21 10:22:55.692 [INFO][4738] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.3/26] IPv6=[] ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" HandleID="k8s-pod-network.3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:22:55.841755 containerd[1975]: 2026-04-21 10:22:55.706 [INFO][4700] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-q9mxk" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4fb7221c-ea69-4ca4-82b5-711eb8fdfc35", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"", Pod:"coredns-674b8bbfcf-q9mxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a32b2160e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:55.841755 containerd[1975]: 2026-04-21 10:22:55.707 [INFO][4700] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.3/32] ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-q9mxk" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:22:55.841755 containerd[1975]: 2026-04-21 10:22:55.708 [INFO][4700] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a32b2160e7 ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-q9mxk" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:22:55.841755 containerd[1975]: 2026-04-21 10:22:55.754 [INFO][4700] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-q9mxk" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:22:55.841755 containerd[1975]: 2026-04-21 10:22:55.756 [INFO][4700] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-q9mxk" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4fb7221c-ea69-4ca4-82b5-711eb8fdfc35", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd", Pod:"coredns-674b8bbfcf-q9mxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a32b2160e7", MAC:"02:e1:93:b8:80:7b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:55.841755 containerd[1975]: 2026-04-21 10:22:55.835 [INFO][4700] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd" Namespace="kube-system" Pod="coredns-674b8bbfcf-q9mxk" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:22:55.917263 systemd-networkd[1897]: cali2edb40b16c8: Link UP Apr 21 10:22:55.917617 systemd-networkd[1897]: cali2edb40b16c8: Gained carrier Apr 21 10:22:55.946609 containerd[1975]: time="2026-04-21T10:22:55.946321527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:55.946609 containerd[1975]: time="2026-04-21T10:22:55.946412485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:55.946609 containerd[1975]: time="2026-04-21T10:22:55.946430664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:55.947022 containerd[1975]: time="2026-04-21T10:22:55.946908241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.113 [ERROR][4767] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.179 [INFO][4767] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0 calico-apiserver-7b4964dbc6- calico-system 36e30077-4d0e-4cfb-85c1-e2be8e459364 922 0 2026-04-21 10:22:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7b4964dbc6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-37 calico-apiserver-7b4964dbc6-crhn4 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali2edb40b16c8 [] [] }} ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-crhn4" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.180 [INFO][4767] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-crhn4" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.480 [INFO][4855] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" 
HandleID="k8s-pod-network.513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.587 [INFO][4855] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" HandleID="k8s-pod-network.513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000f7460), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-37", "pod":"calico-apiserver-7b4964dbc6-crhn4", "timestamp":"2026-04-21 10:22:55.480665036 +0000 UTC"}, Hostname:"ip-172-31-24-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003846e0)} Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.587 [INFO][4855] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.692 [INFO][4855] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.692 [INFO][4855] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-37' Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.697 [INFO][4855] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" host="ip-172-31-24-37" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.718 [INFO][4855] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-37" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.762 [INFO][4855] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.834 [INFO][4855] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.843 [INFO][4855] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.843 [INFO][4855] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" host="ip-172-31-24-37" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.847 [INFO][4855] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.860 [INFO][4855] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" host="ip-172-31-24-37" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.871 [INFO][4855] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.4/26] block=192.168.51.0/26 
handle="k8s-pod-network.513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" host="ip-172-31-24-37" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.871 [INFO][4855] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.4/26] handle="k8s-pod-network.513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" host="ip-172-31-24-37" Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.871 [INFO][4855] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:55.989396 containerd[1975]: 2026-04-21 10:22:55.871 [INFO][4855] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.4/26] IPv6=[] ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" HandleID="k8s-pod-network.513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" Apr 21 10:22:55.990516 containerd[1975]: 2026-04-21 10:22:55.878 [INFO][4767] cni-plugin/k8s.go 418: Populated endpoint ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-crhn4" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0", GenerateName:"calico-apiserver-7b4964dbc6-", Namespace:"calico-system", SelfLink:"", UID:"36e30077-4d0e-4cfb-85c1-e2be8e459364", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4964dbc6", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"", Pod:"calico-apiserver-7b4964dbc6-crhn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2edb40b16c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:55.990516 containerd[1975]: 2026-04-21 10:22:55.878 [INFO][4767] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.4/32] ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-crhn4" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" Apr 21 10:22:55.990516 containerd[1975]: 2026-04-21 10:22:55.879 [INFO][4767] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2edb40b16c8 ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-crhn4" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" Apr 21 10:22:55.990516 containerd[1975]: 2026-04-21 10:22:55.924 [INFO][4767] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-crhn4" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" Apr 21 10:22:55.990516 containerd[1975]: 2026-04-21 10:22:55.929 [INFO][4767] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-crhn4" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0", GenerateName:"calico-apiserver-7b4964dbc6-", Namespace:"calico-system", SelfLink:"", UID:"36e30077-4d0e-4cfb-85c1-e2be8e459364", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4964dbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf", Pod:"calico-apiserver-7b4964dbc6-crhn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2edb40b16c8", MAC:"f2:a5:3a:ec:93:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:55.990516 containerd[1975]: 2026-04-21 10:22:55.966 [INFO][4767] cni-plugin/k8s.go 532: Wrote 
updated endpoint to datastore ContainerID="513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf" Namespace="calico-system" Pod="calico-apiserver-7b4964dbc6-crhn4" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0" Apr 21 10:22:56.049003 systemd-networkd[1897]: calic104c47f4a4: Link UP Apr 21 10:22:56.052736 systemd[1]: Started cri-containerd-3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd.scope - libcontainer container 3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd. Apr 21 10:22:56.057254 systemd-networkd[1897]: calic104c47f4a4: Gained carrier Apr 21 10:22:56.105533 containerd[1975]: time="2026-04-21T10:22:56.105201439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:56.105533 containerd[1975]: time="2026-04-21T10:22:56.105419541Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:56.105533 containerd[1975]: time="2026-04-21T10:22:56.105457757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:56.105920 containerd[1975]: time="2026-04-21T10:22:56.105685460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:54.981 [ERROR][4742] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.109 [INFO][4742] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0 coredns-674b8bbfcf- kube-system e0245abf-1cb1-48b9-b736-006cc52f0a7d 919 0 2026-04-21 10:22:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-37 coredns-674b8bbfcf-kq922 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic104c47f4a4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-kq922" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.109 [INFO][4742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-kq922" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.540 [INFO][4842] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" HandleID="k8s-pod-network.9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" Apr 21 10:22:56.137937 
containerd[1975]: 2026-04-21 10:22:55.601 [INFO][4842] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" HandleID="k8s-pod-network.9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000347e90), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-37", "pod":"coredns-674b8bbfcf-kq922", "timestamp":"2026-04-21 10:22:55.540346838 +0000 UTC"}, Hostname:"ip-172-31-24-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000335080)} Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.601 [INFO][4842] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.872 [INFO][4842] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.872 [INFO][4842] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-37' Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.876 [INFO][4842] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" host="ip-172-31-24-37" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.915 [INFO][4842] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-37" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.946 [INFO][4842] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.948 [INFO][4842] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.955 [INFO][4842] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.955 [INFO][4842] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" host="ip-172-31-24-37" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.963 [INFO][4842] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6 Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:55.978 [INFO][4842] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" host="ip-172-31-24-37" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:56.005 [INFO][4842] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.5/26] block=192.168.51.0/26 
handle="k8s-pod-network.9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" host="ip-172-31-24-37" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:56.005 [INFO][4842] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.5/26] handle="k8s-pod-network.9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" host="ip-172-31-24-37" Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:56.005 [INFO][4842] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:56.137937 containerd[1975]: 2026-04-21 10:22:56.005 [INFO][4842] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.5/26] IPv6=[] ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" HandleID="k8s-pod-network.9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" Apr 21 10:22:56.138925 containerd[1975]: 2026-04-21 10:22:56.012 [INFO][4742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-kq922" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e0245abf-1cb1-48b9-b736-006cc52f0a7d", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"", Pod:"coredns-674b8bbfcf-kq922", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic104c47f4a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:56.138925 containerd[1975]: 2026-04-21 10:22:56.013 [INFO][4742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.5/32] ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-kq922" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" Apr 21 10:22:56.138925 containerd[1975]: 2026-04-21 10:22:56.013 [INFO][4742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic104c47f4a4 ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-kq922" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" Apr 21 10:22:56.138925 containerd[1975]: 2026-04-21 10:22:56.062 [INFO][4742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-kq922" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" Apr 21 10:22:56.138925 containerd[1975]: 2026-04-21 10:22:56.071 [INFO][4742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-kq922" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e0245abf-1cb1-48b9-b736-006cc52f0a7d", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6", Pod:"coredns-674b8bbfcf-kq922", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic104c47f4a4", MAC:"66:00:80:5a:1a:57", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:56.138925 containerd[1975]: 2026-04-21 10:22:56.126 [INFO][4742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6" Namespace="kube-system" Pod="coredns-674b8bbfcf-kq922" WorkloadEndpoint="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0" Apr 21 10:22:56.171070 systemd-networkd[1897]: cali40837c7ba91: Link UP Apr 21 10:22:56.173195 systemd-networkd[1897]: cali40837c7ba91: Gained carrier Apr 21 10:22:56.229774 systemd[1]: Started cri-containerd-513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf.scope - libcontainer container 513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf. 
Apr 21 10:22:56.247660 containerd[1975]: time="2026-04-21T10:22:56.247616383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-q9mxk,Uid:4fb7221c-ea69-4ca4-82b5-711eb8fdfc35,Namespace:kube-system,Attempt:1,} returns sandbox id \"3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd\"" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:55.011 [ERROR][4735] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:55.119 [INFO][4735] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0 goldmane-5b85766d88- calico-system 126f393b-3d88-44db-b88f-944f8fffc842 918 0 2026-04-21 10:22:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-24-37 goldmane-5b85766d88-hsqq5 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali40837c7ba91 [] [] }} ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" Namespace="calico-system" Pod="goldmane-5b85766d88-hsqq5" WorkloadEndpoint="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:55.119 [INFO][4735] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" Namespace="calico-system" Pod="goldmane-5b85766d88-hsqq5" WorkloadEndpoint="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:55.527 [INFO][4844] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" HandleID="k8s-pod-network.0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:55.602 [INFO][4844] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" HandleID="k8s-pod-network.0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003654e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-37", "pod":"goldmane-5b85766d88-hsqq5", "timestamp":"2026-04-21 10:22:55.527827048 +0000 UTC"}, Hostname:"ip-172-31-24-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000334160)} Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:55.602 [INFO][4844] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.005 [INFO][4844] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.005 [INFO][4844] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-37' Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.010 [INFO][4844] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" host="ip-172-31-24-37" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.034 [INFO][4844] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-37" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.078 [INFO][4844] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.096 [INFO][4844] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.102 [INFO][4844] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.102 [INFO][4844] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" host="ip-172-31-24-37" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.120 [INFO][4844] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3 Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.136 [INFO][4844] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" host="ip-172-31-24-37" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.150 [INFO][4844] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.6/26] block=192.168.51.0/26 
handle="k8s-pod-network.0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" host="ip-172-31-24-37" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.150 [INFO][4844] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.6/26] handle="k8s-pod-network.0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" host="ip-172-31-24-37" Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.150 [INFO][4844] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:56.251958 containerd[1975]: 2026-04-21 10:22:56.150 [INFO][4844] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.6/26] IPv6=[] ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" HandleID="k8s-pod-network.0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:22:56.254071 containerd[1975]: 2026-04-21 10:22:56.160 [INFO][4735] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" Namespace="calico-system" Pod="goldmane-5b85766d88-hsqq5" WorkloadEndpoint="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"126f393b-3d88-44db-b88f-944f8fffc842", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"", Pod:"goldmane-5b85766d88-hsqq5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40837c7ba91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:56.254071 containerd[1975]: 2026-04-21 10:22:56.160 [INFO][4735] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.6/32] ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" Namespace="calico-system" Pod="goldmane-5b85766d88-hsqq5" WorkloadEndpoint="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:22:56.254071 containerd[1975]: 2026-04-21 10:22:56.160 [INFO][4735] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali40837c7ba91 ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" Namespace="calico-system" Pod="goldmane-5b85766d88-hsqq5" WorkloadEndpoint="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:22:56.254071 containerd[1975]: 2026-04-21 10:22:56.178 [INFO][4735] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" Namespace="calico-system" Pod="goldmane-5b85766d88-hsqq5" WorkloadEndpoint="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:22:56.254071 containerd[1975]: 2026-04-21 10:22:56.180 [INFO][4735] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" 
Namespace="calico-system" Pod="goldmane-5b85766d88-hsqq5" WorkloadEndpoint="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"126f393b-3d88-44db-b88f-944f8fffc842", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3", Pod:"goldmane-5b85766d88-hsqq5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40837c7ba91", MAC:"c6:1a:8d:6f:d2:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:56.254071 containerd[1975]: 2026-04-21 10:22:56.230 [INFO][4735] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3" Namespace="calico-system" Pod="goldmane-5b85766d88-hsqq5" WorkloadEndpoint="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:22:56.264892 
containerd[1975]: time="2026-04-21T10:22:56.264540407Z" level=info msg="CreateContainer within sandbox \"3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:22:56.310334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount924206126.mount: Deactivated successfully. Apr 21 10:22:56.313428 containerd[1975]: time="2026-04-21T10:22:56.312423326Z" level=info msg="CreateContainer within sandbox \"3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c745b141ba30ac44ef752e284ca94d489b88e41e456e53475a0f1a9c2c079b40\"" Apr 21 10:22:56.316911 containerd[1975]: time="2026-04-21T10:22:56.316868607Z" level=info msg="StartContainer for \"c745b141ba30ac44ef752e284ca94d489b88e41e456e53475a0f1a9c2c079b40\"" Apr 21 10:22:56.335339 containerd[1975]: time="2026-04-21T10:22:56.334915368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:56.335339 containerd[1975]: time="2026-04-21T10:22:56.335010161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:56.335339 containerd[1975]: time="2026-04-21T10:22:56.335033378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:56.335339 containerd[1975]: time="2026-04-21T10:22:56.335177425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:56.408601 systemd-networkd[1897]: cali51c22aee4f6: Link UP Apr 21 10:22:56.411713 systemd-networkd[1897]: cali51c22aee4f6: Gained carrier Apr 21 10:22:56.490627 containerd[1975]: time="2026-04-21T10:22:56.489471861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54c9c74cc-nh8pr,Uid:18a66424-432c-43b2-9b85-3b805c5d2979,Namespace:calico-system,Attempt:1,} returns sandbox id \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\"" Apr 21 10:22:56.499613 containerd[1975]: time="2026-04-21T10:22:56.498174345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4964dbc6-vq6mv,Uid:d0515f99-cc48-4126-aab8-41d534ccbd0f,Namespace:calico-system,Attempt:1,} returns sandbox id \"5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe\"" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:54.993 [ERROR][4777] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:55.084 [INFO][4777] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0 csi-node-driver- calico-system 0da02c82-49c9-40d9-881a-313b594008da 920 0 2026-04-21 10:22:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-37 csi-node-driver-5kwvj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali51c22aee4f6 [] [] }} ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" 
Namespace="calico-system" Pod="csi-node-driver-5kwvj" WorkloadEndpoint="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:55.084 [INFO][4777] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" Namespace="calico-system" Pod="csi-node-driver-5kwvj" WorkloadEndpoint="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:55.569 [INFO][4837] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" HandleID="k8s-pod-network.eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:55.606 [INFO][4837] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" HandleID="k8s-pod-network.eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004257d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-37", "pod":"csi-node-driver-5kwvj", "timestamp":"2026-04-21 10:22:55.569443317 +0000 UTC"}, Hostname:"ip-172-31-24-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000305080)} Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:55.606 [INFO][4837] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.151 [INFO][4837] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.151 [INFO][4837] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-37' Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.157 [INFO][4837] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" host="ip-172-31-24-37" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.176 [INFO][4837] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-37" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.191 [INFO][4837] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.219 [INFO][4837] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.236 [INFO][4837] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.238 [INFO][4837] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" host="ip-172-31-24-37" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.247 [INFO][4837] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31 Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.267 [INFO][4837] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" host="ip-172-31-24-37" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.290 [INFO][4837] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.7/26] block=192.168.51.0/26 
handle="k8s-pod-network.eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" host="ip-172-31-24-37" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.291 [INFO][4837] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.7/26] handle="k8s-pod-network.eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" host="ip-172-31-24-37" Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.291 [INFO][4837] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:56.507336 containerd[1975]: 2026-04-21 10:22:56.291 [INFO][4837] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.7/26] IPv6=[] ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" HandleID="k8s-pod-network.eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:22:56.508392 containerd[1975]: 2026-04-21 10:22:56.337 [INFO][4777] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" Namespace="calico-system" Pod="csi-node-driver-5kwvj" WorkloadEndpoint="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0da02c82-49c9-40d9-881a-313b594008da", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"", Pod:"csi-node-driver-5kwvj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51c22aee4f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:56.508392 containerd[1975]: 2026-04-21 10:22:56.343 [INFO][4777] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.7/32] ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" Namespace="calico-system" Pod="csi-node-driver-5kwvj" WorkloadEndpoint="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:22:56.508392 containerd[1975]: 2026-04-21 10:22:56.348 [INFO][4777] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51c22aee4f6 ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" Namespace="calico-system" Pod="csi-node-driver-5kwvj" WorkloadEndpoint="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:22:56.508392 containerd[1975]: 2026-04-21 10:22:56.412 [INFO][4777] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" Namespace="calico-system" Pod="csi-node-driver-5kwvj" WorkloadEndpoint="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:22:56.508392 containerd[1975]: 2026-04-21 10:22:56.420 [INFO][4777] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" Namespace="calico-system" Pod="csi-node-driver-5kwvj" WorkloadEndpoint="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0da02c82-49c9-40d9-881a-313b594008da", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31", Pod:"csi-node-driver-5kwvj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51c22aee4f6", MAC:"82:25:0c:9f:43:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:56.508392 containerd[1975]: 2026-04-21 10:22:56.485 [INFO][4777] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31" 
Namespace="calico-system" Pod="csi-node-driver-5kwvj" WorkloadEndpoint="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:22:56.561789 containerd[1975]: time="2026-04-21T10:22:56.561282121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 21 10:22:56.565794 systemd[1]: Started cri-containerd-9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6.scope - libcontainer container 9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6. Apr 21 10:22:56.585954 containerd[1975]: time="2026-04-21T10:22:56.585843150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:56.586485 containerd[1975]: time="2026-04-21T10:22:56.585932856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:56.586485 containerd[1975]: time="2026-04-21T10:22:56.585955005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:56.586485 containerd[1975]: time="2026-04-21T10:22:56.586070770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:56.672426 containerd[1975]: time="2026-04-21T10:22:56.667119424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:56.672426 containerd[1975]: time="2026-04-21T10:22:56.667189674Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:56.672426 containerd[1975]: time="2026-04-21T10:22:56.667224479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:56.672426 containerd[1975]: time="2026-04-21T10:22:56.667328487Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:56.675046 systemd-networkd[1897]: cali41eda097e0c: Link UP Apr 21 10:22:56.678785 systemd-networkd[1897]: cali41eda097e0c: Gained carrier Apr 21 10:22:56.731768 systemd[1]: Started cri-containerd-0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3.scope - libcontainer container 0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3. Apr 21 10:22:56.772755 systemd[1]: Started cri-containerd-c745b141ba30ac44ef752e284ca94d489b88e41e456e53475a0f1a9c2c079b40.scope - libcontainer container c745b141ba30ac44ef752e284ca94d489b88e41e456e53475a0f1a9c2c079b40. Apr 21 10:22:56.775071 systemd[1]: Started cri-containerd-eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31.scope - libcontainer container eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31. 
Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:55.370 [ERROR][4803] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:55.489 [INFO][4803] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0 calico-kube-controllers-687db6948- calico-system 83fea0e3-1b93-4c76-acf8-8d0eb96c26b9 917 0 2026-04-21 10:22:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:687db6948 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-37 calico-kube-controllers-687db6948-llqks eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali41eda097e0c [] [] }} ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Namespace="calico-system" Pod="calico-kube-controllers-687db6948-llqks" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:55.489 [INFO][4803] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Namespace="calico-system" Pod="calico-kube-controllers-687db6948-llqks" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:55.771 [INFO][4895] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" 
HandleID="k8s-pod-network.e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:55.837 [INFO][4895] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" HandleID="k8s-pod-network.e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d1b80), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-37", "pod":"calico-kube-controllers-687db6948-llqks", "timestamp":"2026-04-21 10:22:55.771078965 +0000 UTC"}, Hostname:"ip-172-31-24-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00048d080)} Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:55.837 [INFO][4895] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.298 [INFO][4895] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.298 [INFO][4895] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-37' Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.311 [INFO][4895] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" host="ip-172-31-24-37" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.429 [INFO][4895] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-37" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.489 [INFO][4895] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.500 [INFO][4895] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.511 [INFO][4895] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.512 [INFO][4895] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" host="ip-172-31-24-37" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.530 [INFO][4895] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.551 [INFO][4895] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" host="ip-172-31-24-37" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.597 [INFO][4895] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.8/26] block=192.168.51.0/26 
handle="k8s-pod-network.e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" host="ip-172-31-24-37" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.597 [INFO][4895] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.8/26] handle="k8s-pod-network.e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" host="ip-172-31-24-37" Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.598 [INFO][4895] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:22:56.786171 containerd[1975]: 2026-04-21 10:22:56.598 [INFO][4895] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.8/26] IPv6=[] ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" HandleID="k8s-pod-network.e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" Apr 21 10:22:56.787469 containerd[1975]: 2026-04-21 10:22:56.635 [INFO][4803] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Namespace="calico-system" Pod="calico-kube-controllers-687db6948-llqks" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0", GenerateName:"calico-kube-controllers-687db6948-", Namespace:"calico-system", SelfLink:"", UID:"83fea0e3-1b93-4c76-acf8-8d0eb96c26b9", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"687db6948", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"", Pod:"calico-kube-controllers-687db6948-llqks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41eda097e0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:56.787469 containerd[1975]: 2026-04-21 10:22:56.635 [INFO][4803] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.8/32] ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Namespace="calico-system" Pod="calico-kube-controllers-687db6948-llqks" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" Apr 21 10:22:56.787469 containerd[1975]: 2026-04-21 10:22:56.635 [INFO][4803] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41eda097e0c ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Namespace="calico-system" Pod="calico-kube-controllers-687db6948-llqks" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" Apr 21 10:22:56.787469 containerd[1975]: 2026-04-21 10:22:56.681 [INFO][4803] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Namespace="calico-system" Pod="calico-kube-controllers-687db6948-llqks" 
WorkloadEndpoint="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" Apr 21 10:22:56.787469 containerd[1975]: 2026-04-21 10:22:56.684 [INFO][4803] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Namespace="calico-system" Pod="calico-kube-controllers-687db6948-llqks" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0", GenerateName:"calico-kube-controllers-687db6948-", Namespace:"calico-system", SelfLink:"", UID:"83fea0e3-1b93-4c76-acf8-8d0eb96c26b9", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"687db6948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa", Pod:"calico-kube-controllers-687db6948-llqks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41eda097e0c", MAC:"52:3f:b1:9c:8e:2b", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:22:56.787469 containerd[1975]: 2026-04-21 10:22:56.735 [INFO][4803] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa" Namespace="calico-system" Pod="calico-kube-controllers-687db6948-llqks" WorkloadEndpoint="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0" Apr 21 10:22:56.805940 containerd[1975]: time="2026-04-21T10:22:56.805865355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-kq922,Uid:e0245abf-1cb1-48b9-b736-006cc52f0a7d,Namespace:kube-system,Attempt:1,} returns sandbox id \"9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6\"" Apr 21 10:22:56.839559 containerd[1975]: time="2026-04-21T10:22:56.837237401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b4964dbc6-crhn4,Uid:36e30077-4d0e-4cfb-85c1-e2be8e459364,Namespace:calico-system,Attempt:1,} returns sandbox id \"513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf\"" Apr 21 10:22:56.853998 containerd[1975]: time="2026-04-21T10:22:56.853949633Z" level=info msg="CreateContainer within sandbox \"9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 21 10:22:56.903719 containerd[1975]: time="2026-04-21T10:22:56.902905480Z" level=info msg="CreateContainer within sandbox \"9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"48d5be82149d6873ac45fdbef66da8b5cbfac14b485008824f7885a48ba16666\"" Apr 21 10:22:56.905575 containerd[1975]: time="2026-04-21T10:22:56.905039139Z" level=info msg="StartContainer for \"48d5be82149d6873ac45fdbef66da8b5cbfac14b485008824f7885a48ba16666\"" Apr 21 10:22:56.926848 containerd[1975]: 
time="2026-04-21T10:22:56.926790912Z" level=info msg="StartContainer for \"c745b141ba30ac44ef752e284ca94d489b88e41e456e53475a0f1a9c2c079b40\" returns successfully" Apr 21 10:22:56.954196 containerd[1975]: time="2026-04-21T10:22:56.949010934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:22:56.954196 containerd[1975]: time="2026-04-21T10:22:56.949080263Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:22:56.954196 containerd[1975]: time="2026-04-21T10:22:56.949122423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:56.954196 containerd[1975]: time="2026-04-21T10:22:56.949245763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:22:56.975808 containerd[1975]: time="2026-04-21T10:22:56.975768599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5kwvj,Uid:0da02c82-49c9-40d9-881a-313b594008da,Namespace:calico-system,Attempt:1,} returns sandbox id \"eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31\"" Apr 21 10:22:57.021754 systemd[1]: Started cri-containerd-e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa.scope - libcontainer container e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa. 
Apr 21 10:22:57.056675 containerd[1975]: time="2026-04-21T10:22:57.054029218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-hsqq5,Uid:126f393b-3d88-44db-b88f-944f8fffc842,Namespace:calico-system,Attempt:1,} returns sandbox id \"0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3\"" Apr 21 10:22:57.116444 systemd[1]: run-containerd-runc-k8s.io-48d5be82149d6873ac45fdbef66da8b5cbfac14b485008824f7885a48ba16666-runc.fMzOPT.mount: Deactivated successfully. Apr 21 10:22:57.116777 systemd-networkd[1897]: cali2e9ca1d4f02: Gained IPv6LL Apr 21 10:22:57.139215 systemd[1]: Started cri-containerd-48d5be82149d6873ac45fdbef66da8b5cbfac14b485008824f7885a48ba16666.scope - libcontainer container 48d5be82149d6873ac45fdbef66da8b5cbfac14b485008824f7885a48ba16666. Apr 21 10:22:57.219875 containerd[1975]: time="2026-04-21T10:22:57.219512196Z" level=info msg="StartContainer for \"48d5be82149d6873ac45fdbef66da8b5cbfac14b485008824f7885a48ba16666\" returns successfully" Apr 21 10:22:57.235673 systemd-networkd[1897]: cali8a32b2160e7: Gained IPv6LL Apr 21 10:22:57.299734 systemd-networkd[1897]: cali076c28a381c: Gained IPv6LL Apr 21 10:22:57.311538 containerd[1975]: time="2026-04-21T10:22:57.311159929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-687db6948-llqks,Uid:83fea0e3-1b93-4c76-acf8-8d0eb96c26b9,Namespace:calico-system,Attempt:1,} returns sandbox id \"e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa\"" Apr 21 10:22:57.364767 systemd-networkd[1897]: calic104c47f4a4: Gained IPv6LL Apr 21 10:22:57.365148 systemd-networkd[1897]: cali2edb40b16c8: Gained IPv6LL Apr 21 10:22:57.555745 systemd-networkd[1897]: cali51c22aee4f6: Gained IPv6LL Apr 21 10:22:57.618561 kernel: calico-node[5181]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 21 10:22:57.621127 systemd-networkd[1897]: cali40837c7ba91: Gained IPv6LL Apr 21 10:22:58.067794 systemd-networkd[1897]: cali41eda097e0c: Gained 
IPv6LL Apr 21 10:22:58.303555 kubelet[3393]: I0421 10:22:58.300311 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-q9mxk" podStartSLOduration=42.272842949 podStartE2EDuration="42.272842949s" podCreationTimestamp="2026-04-21 10:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:58.225949784 +0000 UTC m=+46.202904878" watchObservedRunningTime="2026-04-21 10:22:58.272842949 +0000 UTC m=+46.249798043" Apr 21 10:22:58.323866 kubelet[3393]: I0421 10:22:58.305144 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-kq922" podStartSLOduration=42.305120105 podStartE2EDuration="42.305120105s" podCreationTimestamp="2026-04-21 10:22:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:22:58.256872314 +0000 UTC m=+46.233827405" watchObservedRunningTime="2026-04-21 10:22:58.305120105 +0000 UTC m=+46.282075196" Apr 21 10:22:58.676672 systemd-networkd[1897]: vxlan.calico: Link UP Apr 21 10:22:58.676682 systemd-networkd[1897]: vxlan.calico: Gained carrier Apr 21 10:22:58.847273 (udev-worker)[4864]: Network interface NamePolicy= disabled on kernel command line. 
Apr 21 10:22:59.238637 containerd[1975]: time="2026-04-21T10:22:59.238563831Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889"
Apr 21 10:22:59.268896 containerd[1975]: time="2026-04-21T10:22:59.268499372Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.707168666s"
Apr 21 10:22:59.268896 containerd[1975]: time="2026-04-21T10:22:59.268577139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\""
Apr 21 10:22:59.296863 containerd[1975]: time="2026-04-21T10:22:59.295106740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:59.298501 containerd[1975]: time="2026-04-21T10:22:59.297310784Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:59.298950 containerd[1975]: time="2026-04-21T10:22:59.298909774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:22:59.322979 containerd[1975]: time="2026-04-21T10:22:59.322936942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 21 10:22:59.438504 containerd[1975]: time="2026-04-21T10:22:59.438438991Z" level=info msg="CreateContainer within sandbox \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Apr 21 10:22:59.557566 containerd[1975]: time="2026-04-21T10:22:59.557118144Z" level=info msg="CreateContainer within sandbox \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\""
Apr 21 10:22:59.559647 containerd[1975]: time="2026-04-21T10:22:59.558542502Z" level=info msg="StartContainer for \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\""
Apr 21 10:22:59.651150 systemd[1]: run-containerd-runc-k8s.io-71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed-runc.GiT7CJ.mount: Deactivated successfully.
Apr 21 10:22:59.668749 systemd[1]: Started cri-containerd-71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed.scope - libcontainer container 71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed.
Apr 21 10:22:59.747731 containerd[1975]: time="2026-04-21T10:22:59.747690371Z" level=info msg="StartContainer for \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\" returns successfully"
Apr 21 10:23:00.628282 systemd-networkd[1897]: vxlan.calico: Gained IPv6LL
Apr 21 10:23:02.874261 ntpd[1949]: Listen normally on 8 vxlan.calico 192.168.51.0:123
Apr 21 10:23:02.874396 ntpd[1949]: Listen normally on 9 cali076c28a381c [fe80::ecee:eeff:feee:eeee%4]:123
Apr 21 10:23:02.879383 ntpd[1949]: 21 Apr 10:23:02 ntpd[1949]: Listen normally on 8 vxlan.calico 192.168.51.0:123
Apr 21 10:23:02.879383 ntpd[1949]: 21 Apr 10:23:02 ntpd[1949]: Listen normally on 9 cali076c28a381c [fe80::ecee:eeff:feee:eeee%4]:123
Apr 21 10:23:02.879383 ntpd[1949]: 21 Apr 10:23:02 ntpd[1949]: Listen normally on 10 cali2e9ca1d4f02 [fe80::ecee:eeff:feee:eeee%5]:123
Apr 21 10:23:02.879383 ntpd[1949]: 21 Apr 10:23:02 ntpd[1949]: Listen normally on 11 cali8a32b2160e7 [fe80::ecee:eeff:feee:eeee%6]:123
Apr 21 10:23:02.879383 ntpd[1949]: 21 Apr 10:23:02 ntpd[1949]: Listen normally on 12 cali2edb40b16c8 [fe80::ecee:eeff:feee:eeee%7]:123
Apr 21 10:23:02.879383 ntpd[1949]: 21 Apr 10:23:02 ntpd[1949]: Listen normally on 13 calic104c47f4a4 [fe80::ecee:eeff:feee:eeee%8]:123
Apr 21 10:23:02.879383 ntpd[1949]: 21 Apr 10:23:02 ntpd[1949]: Listen normally on 14 cali40837c7ba91 [fe80::ecee:eeff:feee:eeee%9]:123
Apr 21 10:23:02.879383 ntpd[1949]: 21 Apr 10:23:02 ntpd[1949]: Listen normally on 15 cali51c22aee4f6 [fe80::ecee:eeff:feee:eeee%10]:123
Apr 21 10:23:02.879383 ntpd[1949]: 21 Apr 10:23:02 ntpd[1949]: Listen normally on 16 cali41eda097e0c [fe80::ecee:eeff:feee:eeee%11]:123
Apr 21 10:23:02.879383 ntpd[1949]: 21 Apr 10:23:02 ntpd[1949]: Listen normally on 17 vxlan.calico [fe80::64c6:b4ff:fe39:9b84%12]:123
Apr 21 10:23:02.874449 ntpd[1949]: Listen normally on 10 cali2e9ca1d4f02 [fe80::ecee:eeff:feee:eeee%5]:123
Apr 21 10:23:02.874478 ntpd[1949]: Listen normally on 11 cali8a32b2160e7 [fe80::ecee:eeff:feee:eeee%6]:123
Apr 21 10:23:02.874506 ntpd[1949]: Listen normally on 12 cali2edb40b16c8 [fe80::ecee:eeff:feee:eeee%7]:123
Apr 21 10:23:02.874572 ntpd[1949]: Listen normally on 13 calic104c47f4a4 [fe80::ecee:eeff:feee:eeee%8]:123
Apr 21 10:23:02.874616 ntpd[1949]: Listen normally on 14 cali40837c7ba91 [fe80::ecee:eeff:feee:eeee%9]:123
Apr 21 10:23:02.874658 ntpd[1949]: Listen normally on 15 cali51c22aee4f6 [fe80::ecee:eeff:feee:eeee%10]:123
Apr 21 10:23:02.874705 ntpd[1949]: Listen normally on 16 cali41eda097e0c [fe80::ecee:eeff:feee:eeee%11]:123
Apr 21 10:23:02.874737 ntpd[1949]: Listen normally on 17 vxlan.calico [fe80::64c6:b4ff:fe39:9b84%12]:123
Apr 21 10:23:03.218749 containerd[1975]: time="2026-04-21T10:23:03.218618377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:23:03.221506 containerd[1975]: time="2026-04-21T10:23:03.221276484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780"
Apr 21 10:23:03.225741 containerd[1975]: time="2026-04-21T10:23:03.225668619Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:23:03.229920 containerd[1975]: time="2026-04-21T10:23:03.229846882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:23:03.231180 containerd[1975]: time="2026-04-21T10:23:03.231006121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.907790061s"
Apr 21 10:23:03.231180 containerd[1975]: time="2026-04-21T10:23:03.231056640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 21 10:23:03.233605 containerd[1975]: time="2026-04-21T10:23:03.233386314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 21 10:23:03.238072 containerd[1975]: time="2026-04-21T10:23:03.237963052Z" level=info msg="CreateContainer within sandbox \"5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 21 10:23:03.266847 containerd[1975]: time="2026-04-21T10:23:03.266797979Z" level=info msg="CreateContainer within sandbox \"5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7d036c8cf96dc4f1f70e4304b9f56f7d24ddc602f6851ab1b21fb6f83f83037b\""
Apr 21 10:23:03.268013 containerd[1975]: time="2026-04-21T10:23:03.267902120Z" level=info msg="StartContainer for \"7d036c8cf96dc4f1f70e4304b9f56f7d24ddc602f6851ab1b21fb6f83f83037b\""
Apr 21 10:23:03.315808 systemd[1]: Started cri-containerd-7d036c8cf96dc4f1f70e4304b9f56f7d24ddc602f6851ab1b21fb6f83f83037b.scope - libcontainer container 7d036c8cf96dc4f1f70e4304b9f56f7d24ddc602f6851ab1b21fb6f83f83037b.
Apr 21 10:23:03.370067 containerd[1975]: time="2026-04-21T10:23:03.369998509Z" level=info msg="StartContainer for \"7d036c8cf96dc4f1f70e4304b9f56f7d24ddc602f6851ab1b21fb6f83f83037b\" returns successfully"
Apr 21 10:23:03.631933 containerd[1975]: time="2026-04-21T10:23:03.631869973Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:23:03.636198 containerd[1975]: time="2026-04-21T10:23:03.636137797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 21 10:23:03.639926 containerd[1975]: time="2026-04-21T10:23:03.639419850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 405.990283ms"
Apr 21 10:23:03.639926 containerd[1975]: time="2026-04-21T10:23:03.639568759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 21 10:23:03.641104 containerd[1975]: time="2026-04-21T10:23:03.640893231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\""
Apr 21 10:23:03.648253 containerd[1975]: time="2026-04-21T10:23:03.648210781Z" level=info msg="CreateContainer within sandbox \"513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 21 10:23:03.680051 containerd[1975]: time="2026-04-21T10:23:03.679909142Z" level=info msg="CreateContainer within sandbox \"513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3473328bcccd1a100ae53d4217981b32c621f84c765e0d526d550d9af8edcb97\""
Apr 21 10:23:03.682588 containerd[1975]: time="2026-04-21T10:23:03.682145167Z" level=info msg="StartContainer for \"3473328bcccd1a100ae53d4217981b32c621f84c765e0d526d550d9af8edcb97\""
Apr 21 10:23:03.727786 systemd[1]: Started cri-containerd-3473328bcccd1a100ae53d4217981b32c621f84c765e0d526d550d9af8edcb97.scope - libcontainer container 3473328bcccd1a100ae53d4217981b32c621f84c765e0d526d550d9af8edcb97.
Apr 21 10:23:03.796303 containerd[1975]: time="2026-04-21T10:23:03.796053962Z" level=info msg="StartContainer for \"3473328bcccd1a100ae53d4217981b32c621f84c765e0d526d550d9af8edcb97\" returns successfully"
Apr 21 10:23:04.103658 kubelet[3393]: I0421 10:23:04.103588 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7b4964dbc6-crhn4" podStartSLOduration=29.322144242 podStartE2EDuration="36.103565424s" podCreationTimestamp="2026-04-21 10:22:28 +0000 UTC" firstStartedPulling="2026-04-21 10:22:56.858896245 +0000 UTC m=+44.835851320" lastFinishedPulling="2026-04-21 10:23:03.640317425 +0000 UTC m=+51.617272502" observedRunningTime="2026-04-21 10:23:04.102230232 +0000 UTC m=+52.079185327" watchObservedRunningTime="2026-04-21 10:23:04.103565424 +0000 UTC m=+52.080520539"
Apr 21 10:23:05.087615 kubelet[3393]: I0421 10:23:05.087128 3393 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Apr 21 10:23:05.670679 containerd[1975]: time="2026-04-21T10:23:05.670620596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:23:05.672961 containerd[1975]: time="2026-04-21T10:23:05.672666364Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Apr 21 10:23:05.675286 containerd[1975]: time="2026-04-21T10:23:05.675243924Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:23:05.682292 containerd[1975]: time="2026-04-21T10:23:05.681276016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:23:05.684827 containerd[1975]: time="2026-04-21T10:23:05.684747940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.043818408s"
Apr 21 10:23:05.686518 containerd[1975]: time="2026-04-21T10:23:05.685744818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Apr 21 10:23:05.688859 containerd[1975]: time="2026-04-21T10:23:05.688299794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\""
Apr 21 10:23:05.695921 containerd[1975]: time="2026-04-21T10:23:05.695874436Z" level=info msg="CreateContainer within sandbox \"eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 21 10:23:05.844074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3063022513.mount: Deactivated successfully.
Apr 21 10:23:05.872577 containerd[1975]: time="2026-04-21T10:23:05.872422240Z" level=info msg="CreateContainer within sandbox \"eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f18e00c2601ec51a44d9f55182202a2ea66b894f51738a8f1a18aaf12c04dfd3\""
Apr 21 10:23:05.874637 containerd[1975]: time="2026-04-21T10:23:05.874590825Z" level=info msg="StartContainer for \"f18e00c2601ec51a44d9f55182202a2ea66b894f51738a8f1a18aaf12c04dfd3\""
Apr 21 10:23:05.936092 systemd[1]: Started cri-containerd-f18e00c2601ec51a44d9f55182202a2ea66b894f51738a8f1a18aaf12c04dfd3.scope - libcontainer container f18e00c2601ec51a44d9f55182202a2ea66b894f51738a8f1a18aaf12c04dfd3.
Apr 21 10:23:05.986876 containerd[1975]: time="2026-04-21T10:23:05.986825006Z" level=info msg="StartContainer for \"f18e00c2601ec51a44d9f55182202a2ea66b894f51738a8f1a18aaf12c04dfd3\" returns successfully"
Apr 21 10:23:06.815477 kubelet[3393]: I0421 10:23:06.815288 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7b4964dbc6-vq6mv" podStartSLOduration=32.127654999 podStartE2EDuration="38.815262084s" podCreationTimestamp="2026-04-21 10:22:28 +0000 UTC" firstStartedPulling="2026-04-21 10:22:56.544585781 +0000 UTC m=+44.521540854" lastFinishedPulling="2026-04-21 10:23:03.232192851 +0000 UTC m=+51.209147939" observedRunningTime="2026-04-21 10:23:04.126178187 +0000 UTC m=+52.103133281" watchObservedRunningTime="2026-04-21 10:23:06.815262084 +0000 UTC m=+54.792217182"
Apr 21 10:23:09.082789 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2763883003.mount: Deactivated successfully.
Apr 21 10:23:10.019394 containerd[1975]: time="2026-04-21T10:23:10.019340826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:23:10.024559 containerd[1975]: time="2026-04-21T10:23:10.023834032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386"
Apr 21 10:23:10.026989 containerd[1975]: time="2026-04-21T10:23:10.026953960Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:23:10.034596 containerd[1975]: time="2026-04-21T10:23:10.032089264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 21 10:23:10.034596 containerd[1975]: time="2026-04-21T10:23:10.033215089Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.344879876s"
Apr 21 10:23:10.034596 containerd[1975]: time="2026-04-21T10:23:10.033248719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\""
Apr 21 10:23:10.069484 containerd[1975]: time="2026-04-21T10:23:10.069297266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Apr 21 10:23:10.145411 containerd[1975]: time="2026-04-21T10:23:10.145369806Z" level=info msg="CreateContainer within sandbox \"0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Apr 21 10:23:10.197898 containerd[1975]: time="2026-04-21T10:23:10.197762384Z" level=info msg="CreateContainer within sandbox \"0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4aa600cced49338786cc37ff10da17dc7a56720b1096328e67f1be49cbae9d8a\""
Apr 21 10:23:10.199899 containerd[1975]: time="2026-04-21T10:23:10.198608170Z" level=info msg="StartContainer for \"4aa600cced49338786cc37ff10da17dc7a56720b1096328e67f1be49cbae9d8a\""
Apr 21 10:23:10.332354 systemd[1]: Started cri-containerd-4aa600cced49338786cc37ff10da17dc7a56720b1096328e67f1be49cbae9d8a.scope - libcontainer container 4aa600cced49338786cc37ff10da17dc7a56720b1096328e67f1be49cbae9d8a.
Apr 21 10:23:10.525173 containerd[1975]: time="2026-04-21T10:23:10.525114720Z" level=info msg="StartContainer for \"4aa600cced49338786cc37ff10da17dc7a56720b1096328e67f1be49cbae9d8a\" returns successfully"
Apr 21 10:23:11.169898 systemd[1]: Started sshd@7-172.31.24.37:22-50.85.169.122:49738.service - OpenSSH per-connection server daemon (50.85.169.122:49738).
Apr 21 10:23:11.520850 kubelet[3393]: I0421 10:23:11.483188 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-hsqq5" podStartSLOduration=31.448217858 podStartE2EDuration="44.441755468s" podCreationTimestamp="2026-04-21 10:22:27 +0000 UTC" firstStartedPulling="2026-04-21 10:22:57.07540426 +0000 UTC m=+45.052359346" lastFinishedPulling="2026-04-21 10:23:10.068941881 +0000 UTC m=+58.045896956" observedRunningTime="2026-04-21 10:23:11.425007341 +0000 UTC m=+59.401962509" watchObservedRunningTime="2026-04-21 10:23:11.441755468 +0000 UTC m=+59.418710579"
Apr 21 10:23:12.312053 sshd[5846]: Accepted publickey for core from 50.85.169.122 port 49738 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:23:12.320380 sshd[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:23:12.339611 systemd-logind[1955]: New session 8 of user core.
Apr 21 10:23:12.342751 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 21 10:23:12.886296 containerd[1975]: time="2026-04-21T10:23:12.885373742Z" level=info msg="StopPodSandbox for \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\""
Apr 21 10:23:13.080156 systemd[1]: run-containerd-runc-k8s.io-4aa600cced49338786cc37ff10da17dc7a56720b1096328e67f1be49cbae9d8a-runc.XiQu5N.mount: Deactivated successfully.
Apr 21 10:23:14.429110 sshd[5846]: pam_unix(sshd:session): session closed for user core
Apr 21 10:23:14.442996 systemd[1]: sshd@7-172.31.24.37:22-50.85.169.122:49738.service: Deactivated successfully.
Apr 21 10:23:14.462989 systemd[1]: session-8.scope: Deactivated successfully.
Apr 21 10:23:14.465518 systemd-logind[1955]: Session 8 logged out. Waiting for processes to exit.
Apr 21 10:23:14.469394 systemd-logind[1955]: Removed session 8.
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:13.895 [WARNING][5894] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0", GenerateName:"calico-apiserver-7b4964dbc6-", Namespace:"calico-system", SelfLink:"", UID:"36e30077-4d0e-4cfb-85c1-e2be8e459364", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4964dbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf", Pod:"calico-apiserver-7b4964dbc6-crhn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2edb40b16c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:13.902 [INFO][5894] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1"
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:13.902 [INFO][5894] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" iface="eth0" netns=""
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:13.902 [INFO][5894] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1"
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:13.902 [INFO][5894] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1"
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:14.610 [INFO][5902] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" HandleID="k8s-pod-network.d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0"
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:14.614 [INFO][5902] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:14.615 [INFO][5902] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:14.649 [WARNING][5902] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" HandleID="k8s-pod-network.d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0"
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:14.649 [INFO][5902] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" HandleID="k8s-pod-network.d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0"
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:14.655 [INFO][5902] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:23:14.681548 containerd[1975]: 2026-04-21 10:23:14.670 [INFO][5894] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1"
Apr 21 10:23:14.725260 containerd[1975]: time="2026-04-21T10:23:14.725207168Z" level=info msg="TearDown network for sandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\" successfully"
Apr 21 10:23:14.725477 containerd[1975]: time="2026-04-21T10:23:14.725448774Z" level=info msg="StopPodSandbox for \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\" returns successfully"
Apr 21 10:23:14.929105 containerd[1975]: time="2026-04-21T10:23:14.929056265Z" level=info msg="RemovePodSandbox for \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\""
Apr 21 10:23:14.937076 containerd[1975]: time="2026-04-21T10:23:14.936951382Z" level=info msg="Forcibly stopping sandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\""
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.033 [WARNING][5940] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0", GenerateName:"calico-apiserver-7b4964dbc6-", Namespace:"calico-system", SelfLink:"", UID:"36e30077-4d0e-4cfb-85c1-e2be8e459364", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4964dbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"513cc4dbd701bb85301d86855b442f3f0385a3939707f069775e832e2b79cbbf", Pod:"calico-apiserver-7b4964dbc6-crhn4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali2edb40b16c8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.033 [INFO][5940] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1"
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.033 [INFO][5940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" iface="eth0" netns=""
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.033 [INFO][5940] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1"
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.033 [INFO][5940] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1"
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.102 [INFO][5947] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" HandleID="k8s-pod-network.d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0"
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.102 [INFO][5947] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.102 [INFO][5947] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.111 [WARNING][5947] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" HandleID="k8s-pod-network.d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0"
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.111 [INFO][5947] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" HandleID="k8s-pod-network.d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--crhn4-eth0"
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.113 [INFO][5947] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:23:15.128608 containerd[1975]: 2026-04-21 10:23:15.118 [INFO][5940] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1"
Apr 21 10:23:15.131540 containerd[1975]: time="2026-04-21T10:23:15.128775078Z" level=info msg="TearDown network for sandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\" successfully"
Apr 21 10:23:15.186023 containerd[1975]: time="2026-04-21T10:23:15.185968409Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:23:15.224613 containerd[1975]: time="2026-04-21T10:23:15.223397713Z" level=info msg="RemovePodSandbox \"d291b1b87ce0451a97c0f0f3db0406ccc0784057027df31519c049deb02366a1\" returns successfully"
Apr 21 10:23:15.231033 containerd[1975]: time="2026-04-21T10:23:15.230641649Z" level=info msg="StopPodSandbox for \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\""
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.350 [WARNING][5962] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4fb7221c-ea69-4ca4-82b5-711eb8fdfc35", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd", Pod:"coredns-674b8bbfcf-q9mxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a32b2160e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.351 [INFO][5962] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be"
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.351 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" iface="eth0" netns=""
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.351 [INFO][5962] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be"
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.351 [INFO][5962] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be"
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.392 [INFO][5969] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" HandleID="k8s-pod-network.bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0"
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.392 [INFO][5969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.392 [INFO][5969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.403 [WARNING][5969] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" HandleID="k8s-pod-network.bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0"
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.403 [INFO][5969] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" HandleID="k8s-pod-network.bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0"
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.405 [INFO][5969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:23:15.410983 containerd[1975]: 2026-04-21 10:23:15.408 [INFO][5962] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be"
Apr 21 10:23:15.417865 containerd[1975]: time="2026-04-21T10:23:15.411938312Z" level=info msg="TearDown network for sandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\" successfully"
Apr 21 10:23:15.417865 containerd[1975]: time="2026-04-21T10:23:15.412224415Z" level=info msg="StopPodSandbox for \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\" returns successfully"
Apr 21 10:23:15.417865 containerd[1975]: time="2026-04-21T10:23:15.416270678Z" level=info msg="RemovePodSandbox for \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\""
Apr 21 10:23:15.417865 containerd[1975]: time="2026-04-21T10:23:15.416302588Z" level=info msg="Forcibly stopping sandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\""
Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.496 [WARNING][5984] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"4fb7221c-ea69-4ca4-82b5-711eb8fdfc35", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 16, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"3c7e96e159966c16595b41179b35a790393639137c6efb0e3948f20e461707cd", Pod:"coredns-674b8bbfcf-q9mxk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8a32b2160e7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.496
[INFO][5984] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.496 [INFO][5984] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" iface="eth0" netns="" Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.496 [INFO][5984] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.496 [INFO][5984] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.559 [INFO][5991] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" HandleID="k8s-pod-network.bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.559 [INFO][5991] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.559 [INFO][5991] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.568 [WARNING][5991] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" HandleID="k8s-pod-network.bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.568 [INFO][5991] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" HandleID="k8s-pod-network.bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--q9mxk-eth0" Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.573 [INFO][5991] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:15.583409 containerd[1975]: 2026-04-21 10:23:15.576 [INFO][5984] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be" Apr 21 10:23:15.584169 containerd[1975]: time="2026-04-21T10:23:15.583450217Z" level=info msg="TearDown network for sandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\" successfully" Apr 21 10:23:15.621738 containerd[1975]: time="2026-04-21T10:23:15.620077142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:15.621738 containerd[1975]: time="2026-04-21T10:23:15.620171193Z" level=info msg="RemovePodSandbox \"bb8752a790d66ab4a457c437f4ee462955370ae48e08a3a72a41e57fc9f128be\" returns successfully" Apr 21 10:23:15.621738 containerd[1975]: time="2026-04-21T10:23:15.620786735Z" level=info msg="StopPodSandbox for \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\"" Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.726 [WARNING][6006] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0da02c82-49c9-40d9-881a-313b594008da", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31", Pod:"csi-node-driver-5kwvj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51c22aee4f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.727 [INFO][6006] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.727 [INFO][6006] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" iface="eth0" netns="" Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.727 [INFO][6006] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.727 [INFO][6006] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.772 [INFO][6014] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" HandleID="k8s-pod-network.1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.772 [INFO][6014] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.772 [INFO][6014] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.796 [WARNING][6014] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" HandleID="k8s-pod-network.1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.796 [INFO][6014] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" HandleID="k8s-pod-network.1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.800 [INFO][6014] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:15.813237 containerd[1975]: 2026-04-21 10:23:15.806 [INFO][6006] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:23:15.813237 containerd[1975]: time="2026-04-21T10:23:15.813097965Z" level=info msg="TearDown network for sandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\" successfully" Apr 21 10:23:15.813237 containerd[1975]: time="2026-04-21T10:23:15.813135966Z" level=info msg="StopPodSandbox for \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\" returns successfully" Apr 21 10:23:15.817190 containerd[1975]: time="2026-04-21T10:23:15.813978450Z" level=info msg="RemovePodSandbox for \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\"" Apr 21 10:23:15.817190 containerd[1975]: time="2026-04-21T10:23:15.814118234Z" level=info msg="Forcibly stopping sandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\"" Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.884 [WARNING][6028] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0da02c82-49c9-40d9-881a-313b594008da", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31", Pod:"csi-node-driver-5kwvj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.51.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51c22aee4f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.885 [INFO][6028] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.885 [INFO][6028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" iface="eth0" netns="" Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.885 [INFO][6028] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.885 [INFO][6028] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.916 [INFO][6036] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" HandleID="k8s-pod-network.1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.917 [INFO][6036] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.917 [INFO][6036] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.925 [WARNING][6036] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" HandleID="k8s-pod-network.1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.926 [INFO][6036] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" HandleID="k8s-pod-network.1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Workload="ip--172--31--24--37-k8s-csi--node--driver--5kwvj-eth0" Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.928 [INFO][6036] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:15.933587 containerd[1975]: 2026-04-21 10:23:15.930 [INFO][6028] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c" Apr 21 10:23:15.933587 containerd[1975]: time="2026-04-21T10:23:15.933441917Z" level=info msg="TearDown network for sandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\" successfully" Apr 21 10:23:16.126457 containerd[1975]: time="2026-04-21T10:23:16.126407073Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:16.126653 containerd[1975]: time="2026-04-21T10:23:16.126496065Z" level=info msg="RemovePodSandbox \"1b788928bcf1035c8f3f3b0072f20485b4d977738e96aea7ce7b1db30aaeb13c\" returns successfully" Apr 21 10:23:16.127449 containerd[1975]: time="2026-04-21T10:23:16.127405011Z" level=info msg="StopPodSandbox for \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\"" Apr 21 10:23:16.135094 containerd[1975]: time="2026-04-21T10:23:16.134579691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:23:16.168904 containerd[1975]: time="2026-04-21T10:23:16.168673642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 21 10:23:16.227979 containerd[1975]: time="2026-04-21T10:23:16.227135165Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:23:16.238690 containerd[1975]: time="2026-04-21T10:23:16.238627271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:23:16.246545 containerd[1975]: time="2026-04-21T10:23:16.244789052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 6.175442016s" Apr 21 10:23:16.246545 containerd[1975]: time="2026-04-21T10:23:16.246212568Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 21 10:23:16.407547 containerd[1975]: time="2026-04-21T10:23:16.406920456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.316 [WARNING][6050] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0", GenerateName:"whisker-54c9c74cc-", Namespace:"calico-system", SelfLink:"", UID:"18a66424-432c-43b2-9b85-3b805c5d2979", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54c9c74cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f", Pod:"whisker-54c9c74cc-nh8pr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e9ca1d4f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.316 [INFO][6050] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.316 [INFO][6050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" iface="eth0" netns="" Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.317 [INFO][6050] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.317 [INFO][6050] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.467 [INFO][6058] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" HandleID="k8s-pod-network.b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.467 [INFO][6058] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.467 [INFO][6058] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.500 [WARNING][6058] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" HandleID="k8s-pod-network.b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.500 [INFO][6058] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" HandleID="k8s-pod-network.b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.503 [INFO][6058] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:16.522676 containerd[1975]: 2026-04-21 10:23:16.510 [INFO][6050] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:23:16.522676 containerd[1975]: time="2026-04-21T10:23:16.522654468Z" level=info msg="TearDown network for sandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\" successfully" Apr 21 10:23:16.526786 containerd[1975]: time="2026-04-21T10:23:16.522686270Z" level=info msg="StopPodSandbox for \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\" returns successfully" Apr 21 10:23:16.561122 containerd[1975]: time="2026-04-21T10:23:16.561078934Z" level=info msg="RemovePodSandbox for \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\"" Apr 21 10:23:16.561122 containerd[1975]: time="2026-04-21T10:23:16.561126821Z" level=info msg="Forcibly stopping sandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\"" Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.689 [WARNING][6074] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0", GenerateName:"whisker-54c9c74cc-", Namespace:"calico-system", SelfLink:"", UID:"18a66424-432c-43b2-9b85-3b805c5d2979", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54c9c74cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f", Pod:"whisker-54c9c74cc-nh8pr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali2e9ca1d4f02", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.689 [INFO][6074] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.689 [INFO][6074] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" iface="eth0" netns="" Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.689 [INFO][6074] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.689 [INFO][6074] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.747 [INFO][6082] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" HandleID="k8s-pod-network.b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.747 [INFO][6082] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.747 [INFO][6082] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.756 [WARNING][6082] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" HandleID="k8s-pod-network.b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.757 [INFO][6082] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" HandleID="k8s-pod-network.b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.759 [INFO][6082] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:16.764116 containerd[1975]: 2026-04-21 10:23:16.761 [INFO][6074] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5" Apr 21 10:23:16.768738 containerd[1975]: time="2026-04-21T10:23:16.764304540Z" level=info msg="TearDown network for sandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\" successfully" Apr 21 10:23:16.780984 containerd[1975]: time="2026-04-21T10:23:16.780303703Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:16.780984 containerd[1975]: time="2026-04-21T10:23:16.780403387Z" level=info msg="RemovePodSandbox \"b2b8166fa55161c4536a4df09c09394d77ad1fc96cc7d814f44c5ab867e751e5\" returns successfully"
Apr 21 10:23:16.800327 containerd[1975]: time="2026-04-21T10:23:16.800273696Z" level=info msg="StopPodSandbox for \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\""
Apr 21 10:23:16.943427 containerd[1975]: time="2026-04-21T10:23:16.943362325Z" level=info msg="CreateContainer within sandbox \"e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.899 [WARNING][6098] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e0245abf-1cb1-48b9-b736-006cc52f0a7d", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6", Pod:"coredns-674b8bbfcf-kq922", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic104c47f4a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.900 [INFO][6098] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe"
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.900 [INFO][6098] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" iface="eth0" netns=""
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.900 [INFO][6098] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe"
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.900 [INFO][6098] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe"
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.957 [INFO][6105] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" HandleID="k8s-pod-network.40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0"
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.958 [INFO][6105] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.958 [INFO][6105] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.983 [WARNING][6105] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" HandleID="k8s-pod-network.40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0"
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.983 [INFO][6105] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" HandleID="k8s-pod-network.40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0"
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.987 [INFO][6105] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:23:17.014889 containerd[1975]: 2026-04-21 10:23:16.992 [INFO][6098] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe"
Apr 21 10:23:17.014889 containerd[1975]: time="2026-04-21T10:23:17.014729392Z" level=info msg="TearDown network for sandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\" successfully"
Apr 21 10:23:17.014889 containerd[1975]: time="2026-04-21T10:23:17.014764369Z" level=info msg="StopPodSandbox for \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\" returns successfully"
Apr 21 10:23:17.023584 containerd[1975]: time="2026-04-21T10:23:17.023509338Z" level=info msg="RemovePodSandbox for \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\""
Apr 21 10:23:17.023584 containerd[1975]: time="2026-04-21T10:23:17.023582541Z" level=info msg="Forcibly stopping sandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\""
Apr 21 10:23:17.188650 containerd[1975]: time="2026-04-21T10:23:17.188603305Z" level=info msg="CreateContainer within sandbox \"e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"b4374ae899ed982787b865ecbd8bd3eb7f94dd14de7ab3aced8a0f809587eefd\""
Apr 21 10:23:17.196961 containerd[1975]: time="2026-04-21T10:23:17.196847478Z" level=info msg="StartContainer for \"b4374ae899ed982787b865ecbd8bd3eb7f94dd14de7ab3aced8a0f809587eefd\""
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.121 [WARNING][6119] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e0245abf-1cb1-48b9-b736-006cc52f0a7d", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"9be6213560d850d15efc9686b63f340568c5f9004bc46e3adea41522f5c6d4d6", Pod:"coredns-674b8bbfcf-kq922", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic104c47f4a4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.121 [INFO][6119] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe"
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.121 [INFO][6119] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" iface="eth0" netns=""
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.122 [INFO][6119] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe"
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.122 [INFO][6119] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe"
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.173 [INFO][6126] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" HandleID="k8s-pod-network.40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0"
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.174 [INFO][6126] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.174 [INFO][6126] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.186 [WARNING][6126] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" HandleID="k8s-pod-network.40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0"
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.186 [INFO][6126] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" HandleID="k8s-pod-network.40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe" Workload="ip--172--31--24--37-k8s-coredns--674b8bbfcf--kq922-eth0"
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.189 [INFO][6126] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:23:17.201683 containerd[1975]: 2026-04-21 10:23:17.193 [INFO][6119] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe"
Apr 21 10:23:17.203897 containerd[1975]: time="2026-04-21T10:23:17.203191337Z" level=info msg="TearDown network for sandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\" successfully"
Apr 21 10:23:17.209817 containerd[1975]: time="2026-04-21T10:23:17.209773762Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:23:17.210047 containerd[1975]: time="2026-04-21T10:23:17.210027466Z" level=info msg="RemovePodSandbox \"40247accd23ee95b03781c85fbf6f8be551708cc749e86ea8acf9ffb52b891fe\" returns successfully"
Apr 21 10:23:17.212604 containerd[1975]: time="2026-04-21T10:23:17.212571551Z" level=info msg="StopPodSandbox for \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\""
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.302 [WARNING][6143] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0", GenerateName:"calico-apiserver-7b4964dbc6-", Namespace:"calico-system", SelfLink:"", UID:"d0515f99-cc48-4126-aab8-41d534ccbd0f", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4964dbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe", Pod:"calico-apiserver-7b4964dbc6-vq6mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali076c28a381c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.302 [INFO][6143] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade"
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.303 [INFO][6143] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" iface="eth0" netns=""
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.303 [INFO][6143] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade"
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.303 [INFO][6143] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade"
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.341 [INFO][6152] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" HandleID="k8s-pod-network.6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0"
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.342 [INFO][6152] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.342 [INFO][6152] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.354 [WARNING][6152] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" HandleID="k8s-pod-network.6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0"
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.355 [INFO][6152] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" HandleID="k8s-pod-network.6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0"
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.357 [INFO][6152] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:23:17.362649 containerd[1975]: 2026-04-21 10:23:17.360 [INFO][6143] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade"
Apr 21 10:23:17.365376 containerd[1975]: time="2026-04-21T10:23:17.363066648Z" level=info msg="TearDown network for sandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\" successfully"
Apr 21 10:23:17.365376 containerd[1975]: time="2026-04-21T10:23:17.363104238Z" level=info msg="StopPodSandbox for \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\" returns successfully"
Apr 21 10:23:17.365611 containerd[1975]: time="2026-04-21T10:23:17.365264030Z" level=info msg="RemovePodSandbox for \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\""
Apr 21 10:23:17.365611 containerd[1975]: time="2026-04-21T10:23:17.365514830Z" level=info msg="Forcibly stopping sandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\""
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.439 [WARNING][6170] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0", GenerateName:"calico-apiserver-7b4964dbc6-", Namespace:"calico-system", SelfLink:"", UID:"d0515f99-cc48-4126-aab8-41d534ccbd0f", ResourceVersion:"1026", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7b4964dbc6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"5fe7675fcf941965764ab18c8b93b2afdee757ed4fb9eb942dc3fa005e5856fe", Pod:"calico-apiserver-7b4964dbc6-vq6mv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali076c28a381c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.439 [INFO][6170] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade"
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.439 [INFO][6170] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" iface="eth0" netns=""
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.439 [INFO][6170] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade"
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.439 [INFO][6170] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade"
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.478 [INFO][6177] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" HandleID="k8s-pod-network.6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0"
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.478 [INFO][6177] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.479 [INFO][6177] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.491 [WARNING][6177] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" HandleID="k8s-pod-network.6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0"
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.491 [INFO][6177] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" HandleID="k8s-pod-network.6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade" Workload="ip--172--31--24--37-k8s-calico--apiserver--7b4964dbc6--vq6mv-eth0"
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.494 [INFO][6177] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:23:17.505168 containerd[1975]: 2026-04-21 10:23:17.498 [INFO][6170] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade"
Apr 21 10:23:17.505168 containerd[1975]: time="2026-04-21T10:23:17.503341876Z" level=info msg="TearDown network for sandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\" successfully"
Apr 21 10:23:17.524382 containerd[1975]: time="2026-04-21T10:23:17.524137039Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:23:17.524382 containerd[1975]: time="2026-04-21T10:23:17.524248731Z" level=info msg="RemovePodSandbox \"6712334960aed1edfd3c51771811c80e25aebff7eca564ea2067b8671966fade\" returns successfully"
Apr 21 10:23:17.525958 containerd[1975]: time="2026-04-21T10:23:17.525919321Z" level=info msg="StopPodSandbox for \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\""
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.598 [WARNING][6191] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0", GenerateName:"calico-kube-controllers-687db6948-", Namespace:"calico-system", SelfLink:"", UID:"83fea0e3-1b93-4c76-acf8-8d0eb96c26b9", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"687db6948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa", Pod:"calico-kube-controllers-687db6948-llqks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41eda097e0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.599 [INFO][6191] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed"
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.599 [INFO][6191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" iface="eth0" netns=""
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.599 [INFO][6191] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed"
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.599 [INFO][6191] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed"
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.695 [INFO][6198] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" HandleID="k8s-pod-network.01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0"
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.696 [INFO][6198] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.696 [INFO][6198] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.728 [WARNING][6198] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" HandleID="k8s-pod-network.01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0"
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.728 [INFO][6198] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" HandleID="k8s-pod-network.01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0"
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.733 [INFO][6198] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:23:17.763134 containerd[1975]: 2026-04-21 10:23:17.741 [INFO][6191] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed"
Apr 21 10:23:17.766276 containerd[1975]: time="2026-04-21T10:23:17.765255496Z" level=info msg="TearDown network for sandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\" successfully"
Apr 21 10:23:17.766276 containerd[1975]: time="2026-04-21T10:23:17.765295927Z" level=info msg="StopPodSandbox for \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\" returns successfully"
Apr 21 10:23:17.768111 containerd[1975]: time="2026-04-21T10:23:17.767473980Z" level=info msg="RemovePodSandbox for \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\""
Apr 21 10:23:17.768111 containerd[1975]: time="2026-04-21T10:23:17.767545307Z" level=info msg="Forcibly stopping sandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\""
Apr 21 10:23:17.819778 systemd[1]: Started cri-containerd-b4374ae899ed982787b865ecbd8bd3eb7f94dd14de7ab3aced8a0f809587eefd.scope - libcontainer container b4374ae899ed982787b865ecbd8bd3eb7f94dd14de7ab3aced8a0f809587eefd.
Apr 21 10:23:17.958230 containerd[1975]: time="2026-04-21T10:23:17.958183135Z" level=info msg="StartContainer for \"b4374ae899ed982787b865ecbd8bd3eb7f94dd14de7ab3aced8a0f809587eefd\" returns successfully"
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:17.926 [WARNING][6225] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0", GenerateName:"calico-kube-controllers-687db6948-", Namespace:"calico-system", SelfLink:"", UID:"83fea0e3-1b93-4c76-acf8-8d0eb96c26b9", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"687db6948", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"e856390b78cfec82e161ad9911a021d1c4b6d712264981bac70f21c20ca1a0aa", Pod:"calico-kube-controllers-687db6948-llqks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali41eda097e0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:17.926 [INFO][6225] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed"
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:17.927 [INFO][6225] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" iface="eth0" netns=""
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:17.927 [INFO][6225] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed"
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:17.927 [INFO][6225] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed"
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:18.008 [INFO][6240] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" HandleID="k8s-pod-network.01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0"
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:18.009 [INFO][6240] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:18.009 [INFO][6240] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:18.026 [WARNING][6240] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" HandleID="k8s-pod-network.01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0"
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:18.026 [INFO][6240] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" HandleID="k8s-pod-network.01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed" Workload="ip--172--31--24--37-k8s-calico--kube--controllers--687db6948--llqks-eth0"
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:18.029 [INFO][6240] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:23:18.036455 containerd[1975]: 2026-04-21 10:23:18.033 [INFO][6225] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed"
Apr 21 10:23:18.040845 containerd[1975]: time="2026-04-21T10:23:18.038113383Z" level=info msg="TearDown network for sandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\" successfully"
Apr 21 10:23:18.046343 containerd[1975]: time="2026-04-21T10:23:18.046282859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:23:18.046490 containerd[1975]: time="2026-04-21T10:23:18.046391892Z" level=info msg="RemovePodSandbox \"01efe08a9b7d380617c71221425a7576c129cf4fcc4a6ce930575859ad344eed\" returns successfully"
Apr 21 10:23:18.047381 containerd[1975]: time="2026-04-21T10:23:18.047002320Z" level=info msg="StopPodSandbox for \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\""
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.104 [WARNING][6268] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"126f393b-3d88-44db-b88f-944f8fffc842", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3", Pod:"goldmane-5b85766d88-hsqq5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40837c7ba91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.104 [INFO][6268] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a"
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.104 [INFO][6268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" iface="eth0" netns=""
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.104 [INFO][6268] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a"
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.104 [INFO][6268] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a"
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.137 [INFO][6278] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" HandleID="k8s-pod-network.95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0"
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.137 [INFO][6278] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.137 [INFO][6278] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.144 [WARNING][6278] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" HandleID="k8s-pod-network.95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0"
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.144 [INFO][6278] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" HandleID="k8s-pod-network.95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0"
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.148 [INFO][6278] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:23:18.153336 containerd[1975]: 2026-04-21 10:23:18.150 [INFO][6268] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a"
Apr 21 10:23:18.153336 containerd[1975]: time="2026-04-21T10:23:18.153097040Z" level=info msg="TearDown network for sandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\" successfully"
Apr 21 10:23:18.153336 containerd[1975]: time="2026-04-21T10:23:18.153124929Z" level=info msg="StopPodSandbox for \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\" returns successfully"
Apr 21 10:23:18.154802 containerd[1975]: time="2026-04-21T10:23:18.154347036Z" level=info msg="RemovePodSandbox for \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\""
Apr 21 10:23:18.154802 containerd[1975]: time="2026-04-21T10:23:18.154383126Z" level=info msg="Forcibly stopping sandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\""
Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.221 [WARNING][6293] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"126f393b-3d88-44db-b88f-944f8fffc842", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"0e90c77dd3369600d2d7faf505005cb704e5b48b7ade1bb205ef2a1fdc07d5d3", Pod:"goldmane-5b85766d88-hsqq5", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.51.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali40837c7ba91", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.223 [INFO][6293] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.223 [INFO][6293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" iface="eth0" netns="" Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.223 [INFO][6293] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.223 [INFO][6293] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.296 [INFO][6300] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" HandleID="k8s-pod-network.95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.297 [INFO][6300] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.297 [INFO][6300] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.311 [WARNING][6300] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" HandleID="k8s-pod-network.95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.311 [INFO][6300] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" HandleID="k8s-pod-network.95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Workload="ip--172--31--24--37-k8s-goldmane--5b85766d88--hsqq5-eth0" Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.313 [INFO][6300] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:18.319576 containerd[1975]: 2026-04-21 10:23:18.316 [INFO][6293] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a" Apr 21 10:23:18.319576 containerd[1975]: time="2026-04-21T10:23:18.319325166Z" level=info msg="TearDown network for sandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\" successfully" Apr 21 10:23:18.326889 containerd[1975]: time="2026-04-21T10:23:18.326646746Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 21 10:23:18.326889 containerd[1975]: time="2026-04-21T10:23:18.326745649Z" level=info msg="RemovePodSandbox \"95a7f026d8f8319bbb6dbaa9438500e2b6f657d74315cc28fe5555944fc8888a\" returns successfully" Apr 21 10:23:19.085382 kubelet[3393]: I0421 10:23:19.079830 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-687db6948-llqks" podStartSLOduration=31.044675208 podStartE2EDuration="50.053852336s" podCreationTimestamp="2026-04-21 10:22:29 +0000 UTC" firstStartedPulling="2026-04-21 10:22:57.314539322 +0000 UTC m=+45.291494400" lastFinishedPulling="2026-04-21 10:23:16.32371643 +0000 UTC m=+64.300671528" observedRunningTime="2026-04-21 10:23:18.968982827 +0000 UTC m=+66.945937936" watchObservedRunningTime="2026-04-21 10:23:19.053852336 +0000 UTC m=+67.030807430" Apr 21 10:23:19.207932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1728460274.mount: Deactivated successfully. Apr 21 10:23:19.224677 containerd[1975]: time="2026-04-21T10:23:19.224629672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:23:19.226622 containerd[1975]: time="2026-04-21T10:23:19.226474223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 21 10:23:19.229567 containerd[1975]: time="2026-04-21T10:23:19.229168952Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:23:19.232970 containerd[1975]: time="2026-04-21T10:23:19.232925366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:23:19.233867 containerd[1975]: 
time="2026-04-21T10:23:19.233829330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 2.826841181s" Apr 21 10:23:19.234031 containerd[1975]: time="2026-04-21T10:23:19.234007215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 21 10:23:19.294217 containerd[1975]: time="2026-04-21T10:23:19.293883127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 21 10:23:19.328192 containerd[1975]: time="2026-04-21T10:23:19.328144629Z" level=info msg="CreateContainer within sandbox \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:23:19.349572 containerd[1975]: time="2026-04-21T10:23:19.349442888Z" level=info msg="CreateContainer within sandbox \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\"" Apr 21 10:23:19.352339 containerd[1975]: time="2026-04-21T10:23:19.351869855Z" level=info msg="StartContainer for \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\"" Apr 21 10:23:19.414256 systemd[1]: Started cri-containerd-7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c.scope - libcontainer container 7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c. 
Apr 21 10:23:19.478493 containerd[1975]: time="2026-04-21T10:23:19.478442251Z" level=info msg="StartContainer for \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\" returns successfully" Apr 21 10:23:19.650642 systemd[1]: Started sshd@8-172.31.24.37:22-50.85.169.122:49746.service - OpenSSH per-connection server daemon (50.85.169.122:49746). Apr 21 10:23:20.189808 containerd[1975]: time="2026-04-21T10:23:20.188689737Z" level=info msg="StopContainer for \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\" with timeout 30 (s)" Apr 21 10:23:20.189808 containerd[1975]: time="2026-04-21T10:23:20.188752038Z" level=info msg="StopContainer for \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\" with timeout 30 (s)" Apr 21 10:23:20.194293 containerd[1975]: time="2026-04-21T10:23:20.194246593Z" level=info msg="Stop container \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\" with signal terminated" Apr 21 10:23:20.195558 containerd[1975]: time="2026-04-21T10:23:20.194697719Z" level=info msg="Stop container \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\" with signal terminated" Apr 21 10:23:20.239887 systemd[1]: cri-containerd-7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c.scope: Deactivated successfully. Apr 21 10:23:20.286154 systemd[1]: cri-containerd-71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed.scope: Deactivated successfully. Apr 21 10:23:20.335030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c-rootfs.mount: Deactivated successfully. Apr 21 10:23:20.360421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed-rootfs.mount: Deactivated successfully. 
Apr 21 10:23:20.396504 kubelet[3393]: I0421 10:23:20.396198 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-54c9c74cc-nh8pr" podStartSLOduration=23.632391492 podStartE2EDuration="46.396175033s" podCreationTimestamp="2026-04-21 10:22:34 +0000 UTC" firstStartedPulling="2026-04-21 10:22:56.529857333 +0000 UTC m=+44.506812406" lastFinishedPulling="2026-04-21 10:23:19.293640862 +0000 UTC m=+67.270595947" observedRunningTime="2026-04-21 10:23:20.20860437 +0000 UTC m=+68.185559564" watchObservedRunningTime="2026-04-21 10:23:20.396175033 +0000 UTC m=+68.373130127" Apr 21 10:23:20.397197 containerd[1975]: time="2026-04-21T10:23:20.349714932Z" level=info msg="shim disconnected" id=7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c namespace=k8s.io Apr 21 10:23:20.397197 containerd[1975]: time="2026-04-21T10:23:20.396679830Z" level=warning msg="cleaning up after shim disconnected" id=7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c namespace=k8s.io Apr 21 10:23:20.397197 containerd[1975]: time="2026-04-21T10:23:20.365957040Z" level=info msg="shim disconnected" id=71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed namespace=k8s.io Apr 21 10:23:20.397197 containerd[1975]: time="2026-04-21T10:23:20.396765101Z" level=warning msg="cleaning up after shim disconnected" id=71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed namespace=k8s.io Apr 21 10:23:20.397197 containerd[1975]: time="2026-04-21T10:23:20.396777812Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:23:20.397197 containerd[1975]: time="2026-04-21T10:23:20.396977037Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:23:20.455136 containerd[1975]: time="2026-04-21T10:23:20.455011289Z" level=warning msg="cleanup warnings time=\"2026-04-21T10:23:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" 
runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 21 10:23:20.466907 containerd[1975]: time="2026-04-21T10:23:20.466688511Z" level=info msg="StopContainer for \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\" returns successfully" Apr 21 10:23:20.485673 containerd[1975]: time="2026-04-21T10:23:20.485621805Z" level=info msg="StopContainer for \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\" returns successfully" Apr 21 10:23:20.486370 containerd[1975]: time="2026-04-21T10:23:20.486338491Z" level=info msg="StopPodSandbox for \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\"" Apr 21 10:23:20.491010 containerd[1975]: time="2026-04-21T10:23:20.490965679Z" level=info msg="Container to stop \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:23:20.491010 containerd[1975]: time="2026-04-21T10:23:20.491006231Z" level=info msg="Container to stop \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Apr 21 10:23:20.496153 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f-shm.mount: Deactivated successfully. Apr 21 10:23:20.509119 systemd[1]: cri-containerd-c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f.scope: Deactivated successfully. 
Apr 21 10:23:20.535403 containerd[1975]: time="2026-04-21T10:23:20.535148179Z" level=info msg="shim disconnected" id=c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f namespace=k8s.io Apr 21 10:23:20.535403 containerd[1975]: time="2026-04-21T10:23:20.535212698Z" level=warning msg="cleaning up after shim disconnected" id=c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f namespace=k8s.io Apr 21 10:23:20.535403 containerd[1975]: time="2026-04-21T10:23:20.535223956Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 21 10:23:20.538313 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f-rootfs.mount: Deactivated successfully. Apr 21 10:23:20.656065 systemd-networkd[1897]: cali2e9ca1d4f02: Link DOWN Apr 21 10:23:20.656097 systemd-networkd[1897]: cali2e9ca1d4f02: Lost carrier Apr 21 10:23:20.787291 sshd[6356]: Accepted publickey for core from 50.85.169.122 port 49746 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:23:20.793923 sshd[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:23:20.813941 systemd-logind[1955]: New session 9 of user core. Apr 21 10:23:20.822748 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.647 [INFO][6498] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.648 [INFO][6498] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" iface="eth0" netns="/var/run/netns/cni-a592b1dc-62ee-aeb1-f5c8-aaec0e3ccb71" Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.649 [INFO][6498] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" iface="eth0" netns="/var/run/netns/cni-a592b1dc-62ee-aeb1-f5c8-aaec0e3ccb71" Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.667 [INFO][6498] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" after=19.239343ms iface="eth0" netns="/var/run/netns/cni-a592b1dc-62ee-aeb1-f5c8-aaec0e3ccb71" Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.667 [INFO][6498] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.667 [INFO][6498] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.702 [INFO][6506] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.702 [INFO][6506] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.703 [INFO][6506] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.790 [INFO][6506] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.790 [INFO][6506] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0" Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.794 [INFO][6506] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:20.823564 containerd[1975]: 2026-04-21 10:23:20.809 [INFO][6498] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Apr 21 10:23:20.835630 containerd[1975]: time="2026-04-21T10:23:20.832761817Z" level=info msg="TearDown network for sandbox \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\" successfully" Apr 21 10:23:20.835630 containerd[1975]: time="2026-04-21T10:23:20.832806483Z" level=info msg="StopPodSandbox for \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\" returns successfully" Apr 21 10:23:21.050620 kubelet[3393]: I0421 10:23:21.050474 3393 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18a66424-432c-43b2-9b85-3b805c5d2979-whisker-ca-bundle\") pod \"18a66424-432c-43b2-9b85-3b805c5d2979\" (UID: \"18a66424-432c-43b2-9b85-3b805c5d2979\") " Apr 21 10:23:21.050620 kubelet[3393]: I0421 10:23:21.050599 3393 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/18a66424-432c-43b2-9b85-3b805c5d2979-nginx-config\") pod \"18a66424-432c-43b2-9b85-3b805c5d2979\" (UID: \"18a66424-432c-43b2-9b85-3b805c5d2979\") " Apr 21 10:23:21.050844 kubelet[3393]: I0421 10:23:21.050642 3393 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18a66424-432c-43b2-9b85-3b805c5d2979-whisker-backend-key-pair\") pod \"18a66424-432c-43b2-9b85-3b805c5d2979\" (UID: \"18a66424-432c-43b2-9b85-3b805c5d2979\") " Apr 21 10:23:21.050844 kubelet[3393]: I0421 10:23:21.050670 3393 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgwqc\" (UniqueName: \"kubernetes.io/projected/18a66424-432c-43b2-9b85-3b805c5d2979-kube-api-access-xgwqc\") pod \"18a66424-432c-43b2-9b85-3b805c5d2979\" (UID: \"18a66424-432c-43b2-9b85-3b805c5d2979\") " Apr 21 10:23:21.079681 kubelet[3393]: I0421 10:23:21.075502 3393 
operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a66424-432c-43b2-9b85-3b805c5d2979-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "18a66424-432c-43b2-9b85-3b805c5d2979" (UID: "18a66424-432c-43b2-9b85-3b805c5d2979"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:23:21.080058 kubelet[3393]: I0421 10:23:21.074139 3393 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18a66424-432c-43b2-9b85-3b805c5d2979-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "18a66424-432c-43b2-9b85-3b805c5d2979" (UID: "18a66424-432c-43b2-9b85-3b805c5d2979"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 21 10:23:21.109874 kubelet[3393]: I0421 10:23:21.109829 3393 scope.go:117] "RemoveContainer" containerID="7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c" Apr 21 10:23:21.121727 kubelet[3393]: I0421 10:23:21.121631 3393 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18a66424-432c-43b2-9b85-3b805c5d2979-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "18a66424-432c-43b2-9b85-3b805c5d2979" (UID: "18a66424-432c-43b2-9b85-3b805c5d2979"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 21 10:23:21.122807 kubelet[3393]: I0421 10:23:21.122630 3393 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18a66424-432c-43b2-9b85-3b805c5d2979-kube-api-access-xgwqc" (OuterVolumeSpecName: "kube-api-access-xgwqc") pod "18a66424-432c-43b2-9b85-3b805c5d2979" (UID: "18a66424-432c-43b2-9b85-3b805c5d2979"). InnerVolumeSpecName "kube-api-access-xgwqc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 21 10:23:21.124405 containerd[1975]: time="2026-04-21T10:23:21.124367902Z" level=info msg="RemoveContainer for \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\"" Apr 21 10:23:21.134517 containerd[1975]: time="2026-04-21T10:23:21.134469496Z" level=info msg="RemoveContainer for \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\" returns successfully" Apr 21 10:23:21.151724 kubelet[3393]: I0421 10:23:21.151546 3393 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18a66424-432c-43b2-9b85-3b805c5d2979-whisker-ca-bundle\") on node \"ip-172-31-24-37\" DevicePath \"\"" Apr 21 10:23:21.151724 kubelet[3393]: I0421 10:23:21.151586 3393 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/18a66424-432c-43b2-9b85-3b805c5d2979-nginx-config\") on node \"ip-172-31-24-37\" DevicePath \"\"" Apr 21 10:23:21.151724 kubelet[3393]: I0421 10:23:21.151603 3393 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/18a66424-432c-43b2-9b85-3b805c5d2979-whisker-backend-key-pair\") on node \"ip-172-31-24-37\" DevicePath \"\"" Apr 21 10:23:21.151724 kubelet[3393]: I0421 10:23:21.151617 3393 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xgwqc\" (UniqueName: \"kubernetes.io/projected/18a66424-432c-43b2-9b85-3b805c5d2979-kube-api-access-xgwqc\") on node \"ip-172-31-24-37\" DevicePath \"\"" Apr 21 10:23:21.162991 kubelet[3393]: I0421 10:23:21.162830 3393 scope.go:117] "RemoveContainer" containerID="71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed" Apr 21 10:23:21.168730 containerd[1975]: time="2026-04-21T10:23:21.168598665Z" level=info msg="RemoveContainer for \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\"" Apr 21 10:23:21.178369 containerd[1975]: 
time="2026-04-21T10:23:21.178322743Z" level=info msg="RemoveContainer for \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\" returns successfully" Apr 21 10:23:21.178762 kubelet[3393]: I0421 10:23:21.178734 3393 scope.go:117] "RemoveContainer" containerID="7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c" Apr 21 10:23:21.207191 systemd[1]: run-netns-cni\x2da592b1dc\x2d62ee\x2daeb1\x2df5c8\x2daaec0e3ccb71.mount: Deactivated successfully. Apr 21 10:23:21.207353 systemd[1]: var-lib-kubelet-pods-18a66424\x2d432c\x2d43b2\x2d9b85\x2d3b805c5d2979-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxgwqc.mount: Deactivated successfully. Apr 21 10:23:21.207449 systemd[1]: var-lib-kubelet-pods-18a66424\x2d432c\x2d43b2\x2d9b85\x2d3b805c5d2979-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 21 10:23:21.220205 containerd[1975]: time="2026-04-21T10:23:21.211560527Z" level=error msg="ContainerStatus for \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\": not found" Apr 21 10:23:21.220865 kubelet[3393]: E0421 10:23:21.220821 3393 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\": not found" containerID="7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c" Apr 21 10:23:21.256597 kubelet[3393]: I0421 10:23:21.230129 3393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c"} err="failed to get container status \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\": not found" Apr 21 10:23:21.256597 kubelet[3393]: I0421 10:23:21.256459 3393 scope.go:117] "RemoveContainer" containerID="71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed" Apr 21 10:23:21.256853 containerd[1975]: time="2026-04-21T10:23:21.256802396Z" level=error msg="ContainerStatus for \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\": not found" Apr 21 10:23:21.257019 kubelet[3393]: E0421 10:23:21.256990 3393 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\": not found" containerID="71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed" Apr 21 10:23:21.257098 kubelet[3393]: I0421 10:23:21.257029 3393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed"} err="failed to get container status \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\": not found" Apr 21 10:23:21.257098 kubelet[3393]: I0421 10:23:21.257055 3393 scope.go:117] "RemoveContainer" containerID="7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c" Apr 21 10:23:21.257779 containerd[1975]: time="2026-04-21T10:23:21.257736698Z" level=error msg="ContainerStatus for \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\": not found" Apr 21 10:23:21.257927 kubelet[3393]: I0421 10:23:21.257899 3393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c"} err="failed to get container status \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7239feba7faa820406dfebc989d784ecb6e1ecf42695f599a74d88a444049e7c\": not found" Apr 21 10:23:21.258004 kubelet[3393]: I0421 10:23:21.257931 3393 scope.go:117] "RemoveContainer" containerID="71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed" Apr 21 10:23:21.258905 containerd[1975]: time="2026-04-21T10:23:21.258864871Z" level=error msg="ContainerStatus for \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\": not found" Apr 21 10:23:21.259116 kubelet[3393]: I0421 10:23:21.259092 3393 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed"} err="failed to get container status \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\": rpc error: code = NotFound desc = an error occurred when try to find container \"71164db084bf6b0445fe4679eae77990b8374ff1549a8eed2d8a7379d4fd78ed\": not found" Apr 21 10:23:21.432619 systemd[1]: Removed slice kubepods-besteffort-pod18a66424_432c_43b2_9b85_3b805c5d2979.slice - libcontainer container kubepods-besteffort-pod18a66424_432c_43b2_9b85_3b805c5d2979.slice. 
Apr 21 10:23:21.487016 containerd[1975]: time="2026-04-21T10:23:21.486942224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:23:21.491891 containerd[1975]: time="2026-04-21T10:23:21.491409161Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 21 10:23:21.499502 containerd[1975]: time="2026-04-21T10:23:21.495272375Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:23:21.505284 containerd[1975]: time="2026-04-21T10:23:21.505119848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 21 10:23:21.509607 containerd[1975]: time="2026-04-21T10:23:21.509431966Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.215495222s" Apr 21 10:23:21.510087 containerd[1975]: time="2026-04-21T10:23:21.509990417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 21 10:23:21.555842 containerd[1975]: time="2026-04-21T10:23:21.555779707Z" level=info msg="CreateContainer within sandbox \"eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 21 10:23:21.622670 containerd[1975]: time="2026-04-21T10:23:21.621855765Z" level=info msg="CreateContainer within sandbox \"eaccc08bbb735af839873916fd3babe3d823d43032bde4982d4a5ba3efc5cc31\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"81b18dcfef4be300a9351161240d90cf4dffa0e6c68201b82b96ea695534e6a0\"" Apr 21 10:23:21.626985 containerd[1975]: time="2026-04-21T10:23:21.626721374Z" level=info msg="StartContainer for \"81b18dcfef4be300a9351161240d90cf4dffa0e6c68201b82b96ea695534e6a0\"" Apr 21 10:23:21.629452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1878398560.mount: Deactivated successfully. Apr 21 10:23:21.816813 systemd[1]: Started cri-containerd-81b18dcfef4be300a9351161240d90cf4dffa0e6c68201b82b96ea695534e6a0.scope - libcontainer container 81b18dcfef4be300a9351161240d90cf4dffa0e6c68201b82b96ea695534e6a0. Apr 21 10:23:21.915545 containerd[1975]: time="2026-04-21T10:23:21.915368799Z" level=info msg="StartContainer for \"81b18dcfef4be300a9351161240d90cf4dffa0e6c68201b82b96ea695534e6a0\" returns successfully" Apr 21 10:23:21.980288 kubelet[3393]: I0421 10:23:21.979874 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f853702d-74b8-4f4b-8d30-f369126e8467-whisker-backend-key-pair\") pod \"whisker-5545b8bffd-q642t\" (UID: \"f853702d-74b8-4f4b-8d30-f369126e8467\") " pod="calico-system/whisker-5545b8bffd-q642t" Apr 21 10:23:21.980288 kubelet[3393]: I0421 10:23:21.979918 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/f853702d-74b8-4f4b-8d30-f369126e8467-nginx-config\") pod \"whisker-5545b8bffd-q642t\" (UID: \"f853702d-74b8-4f4b-8d30-f369126e8467\") " pod="calico-system/whisker-5545b8bffd-q642t" Apr 21 10:23:21.980288 
kubelet[3393]: I0421 10:23:21.979968 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f853702d-74b8-4f4b-8d30-f369126e8467-whisker-ca-bundle\") pod \"whisker-5545b8bffd-q642t\" (UID: \"f853702d-74b8-4f4b-8d30-f369126e8467\") " pod="calico-system/whisker-5545b8bffd-q642t" Apr 21 10:23:21.980288 kubelet[3393]: I0421 10:23:21.979999 3393 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59pcm\" (UniqueName: \"kubernetes.io/projected/f853702d-74b8-4f4b-8d30-f369126e8467-kube-api-access-59pcm\") pod \"whisker-5545b8bffd-q642t\" (UID: \"f853702d-74b8-4f4b-8d30-f369126e8467\") " pod="calico-system/whisker-5545b8bffd-q642t" Apr 21 10:23:22.011690 systemd[1]: Created slice kubepods-besteffort-podf853702d_74b8_4f4b_8d30_f369126e8467.slice - libcontainer container kubepods-besteffort-podf853702d_74b8_4f4b_8d30_f369126e8467.slice. 
Apr 21 10:23:22.242671 kubelet[3393]: I0421 10:23:22.241875 3393 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18a66424-432c-43b2-9b85-3b805c5d2979" path="/var/lib/kubelet/pods/18a66424-432c-43b2-9b85-3b805c5d2979/volumes" Apr 21 10:23:22.321879 containerd[1975]: time="2026-04-21T10:23:22.321815883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5545b8bffd-q642t,Uid:f853702d-74b8-4f4b-8d30-f369126e8467,Namespace:calico-system,Attempt:0,}" Apr 21 10:23:22.675456 kubelet[3393]: I0421 10:23:22.675413 3393 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 21 10:23:22.679848 kubelet[3393]: I0421 10:23:22.679793 3393 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 21 10:23:22.735569 systemd-networkd[1897]: calie8a712b866e: Link UP Apr 21 10:23:22.739883 systemd-networkd[1897]: calie8a712b866e: Gained carrier Apr 21 10:23:22.753175 (udev-worker)[6511]: Network interface NamePolicy= disabled on kernel command line. 
Apr 21 10:23:22.772484 kubelet[3393]: I0421 10:23:22.772412 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5kwvj" podStartSLOduration=29.240109476 podStartE2EDuration="53.772392886s" podCreationTimestamp="2026-04-21 10:22:29 +0000 UTC" firstStartedPulling="2026-04-21 10:22:56.979745241 +0000 UTC m=+44.956700314" lastFinishedPulling="2026-04-21 10:23:21.512028651 +0000 UTC m=+69.488983724" observedRunningTime="2026-04-21 10:23:22.205941629 +0000 UTC m=+70.182896725" watchObservedRunningTime="2026-04-21 10:23:22.772392886 +0000 UTC m=+70.749347994" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.525 [INFO][6590] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0 whisker-5545b8bffd- calico-system f853702d-74b8-4f4b-8d30-f369126e8467 1165 0 2026-04-21 10:23:21 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5545b8bffd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-24-37 whisker-5545b8bffd-q642t eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calie8a712b866e [] [] }} ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Namespace="calico-system" Pod="whisker-5545b8bffd-q642t" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.526 [INFO][6590] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Namespace="calico-system" Pod="whisker-5545b8bffd-q642t" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.603 [INFO][6598] ipam/ipam_plugin.go 235: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" HandleID="k8s-pod-network.0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Workload="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.619 [INFO][6598] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" HandleID="k8s-pod-network.0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Workload="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000261c60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-37", "pod":"whisker-5545b8bffd-q642t", "timestamp":"2026-04-21 10:23:22.603504776 +0000 UTC"}, Hostname:"ip-172-31-24-37", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000344420)} Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.619 [INFO][6598] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.619 [INFO][6598] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.619 [INFO][6598] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-37' Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.628 [INFO][6598] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" host="ip-172-31-24-37" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.645 [INFO][6598] ipam/ipam.go 409: Looking up existing affinities for host host="ip-172-31-24-37" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.658 [INFO][6598] ipam/ipam.go 526: Trying affinity for 192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.663 [INFO][6598] ipam/ipam.go 160: Attempting to load block cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.669 [INFO][6598] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.51.0/26 host="ip-172-31-24-37" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.669 [INFO][6598] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.51.0/26 handle="k8s-pod-network.0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" host="ip-172-31-24-37" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.673 [INFO][6598] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210 Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.684 [INFO][6598] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.51.0/26 handle="k8s-pod-network.0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" host="ip-172-31-24-37" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.701 [INFO][6598] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.51.9/26] block=192.168.51.0/26 
handle="k8s-pod-network.0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" host="ip-172-31-24-37" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.701 [INFO][6598] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.51.9/26] handle="k8s-pod-network.0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" host="ip-172-31-24-37" Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.702 [INFO][6598] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 21 10:23:22.789623 containerd[1975]: 2026-04-21 10:23:22.702 [INFO][6598] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.51.9/26] IPv6=[] ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" HandleID="k8s-pod-network.0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Workload="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0" Apr 21 10:23:22.792587 containerd[1975]: 2026-04-21 10:23:22.715 [INFO][6590] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Namespace="calico-system" Pod="whisker-5545b8bffd-q642t" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0", GenerateName:"whisker-5545b8bffd-", Namespace:"calico-system", SelfLink:"", UID:"f853702d-74b8-4f4b-8d30-f369126e8467", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5545b8bffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"", Pod:"whisker-5545b8bffd-q642t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie8a712b866e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:22.792587 containerd[1975]: 2026-04-21 10:23:22.715 [INFO][6590] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.51.9/32] ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Namespace="calico-system" Pod="whisker-5545b8bffd-q642t" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0" Apr 21 10:23:22.792587 containerd[1975]: 2026-04-21 10:23:22.715 [INFO][6590] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8a712b866e ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Namespace="calico-system" Pod="whisker-5545b8bffd-q642t" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0" Apr 21 10:23:22.792587 containerd[1975]: 2026-04-21 10:23:22.740 [INFO][6590] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Namespace="calico-system" Pod="whisker-5545b8bffd-q642t" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0" Apr 21 10:23:22.792587 containerd[1975]: 2026-04-21 10:23:22.741 [INFO][6590] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Namespace="calico-system" 
Pod="whisker-5545b8bffd-q642t" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0", GenerateName:"whisker-5545b8bffd-", Namespace:"calico-system", SelfLink:"", UID:"f853702d-74b8-4f4b-8d30-f369126e8467", ResourceVersion:"1165", Generation:0, CreationTimestamp:time.Date(2026, time.April, 21, 10, 23, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5545b8bffd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-37", ContainerID:"0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210", Pod:"whisker-5545b8bffd-q642t", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.51.9/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calie8a712b866e", MAC:"d2:de:7a:ad:8b:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 21 10:23:22.792587 containerd[1975]: 2026-04-21 10:23:22.779 [INFO][6590] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210" Namespace="calico-system" Pod="whisker-5545b8bffd-q642t" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--5545b8bffd--q642t-eth0" Apr 21 10:23:23.064007 containerd[1975]: 
time="2026-04-21T10:23:23.053340035Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 21 10:23:23.064789 containerd[1975]: time="2026-04-21T10:23:23.063989055Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 21 10:23:23.064929 containerd[1975]: time="2026-04-21T10:23:23.064774802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:23:23.065180 containerd[1975]: time="2026-04-21T10:23:23.065101202Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 21 10:23:23.103775 systemd[1]: Started cri-containerd-0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210.scope - libcontainer container 0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210. Apr 21 10:23:23.164441 sshd[6356]: pam_unix(sshd:session): session closed for user core Apr 21 10:23:23.182270 systemd[1]: sshd@8-172.31.24.37:22-50.85.169.122:49746.service: Deactivated successfully. Apr 21 10:23:23.183219 systemd-logind[1955]: Session 9 logged out. Waiting for processes to exit. Apr 21 10:23:23.186261 systemd[1]: session-9.scope: Deactivated successfully. Apr 21 10:23:23.192675 systemd-logind[1955]: Removed session 9. 
Apr 21 10:23:23.216152 containerd[1975]: time="2026-04-21T10:23:23.216084053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5545b8bffd-q642t,Uid:f853702d-74b8-4f4b-8d30-f369126e8467,Namespace:calico-system,Attempt:0,} returns sandbox id \"0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210\"" Apr 21 10:23:23.285134 containerd[1975]: time="2026-04-21T10:23:23.284805920Z" level=info msg="CreateContainer within sandbox \"0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 21 10:23:23.321757 containerd[1975]: time="2026-04-21T10:23:23.321651815Z" level=info msg="CreateContainer within sandbox \"0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"f0e1aa5e090b286ebf7d3f6633b315b79b36ee37eb0846e4e45a0d143d61beb3\"" Apr 21 10:23:23.323692 containerd[1975]: time="2026-04-21T10:23:23.323662874Z" level=info msg="StartContainer for \"f0e1aa5e090b286ebf7d3f6633b315b79b36ee37eb0846e4e45a0d143d61beb3\"" Apr 21 10:23:23.383292 systemd[1]: Started cri-containerd-f0e1aa5e090b286ebf7d3f6633b315b79b36ee37eb0846e4e45a0d143d61beb3.scope - libcontainer container f0e1aa5e090b286ebf7d3f6633b315b79b36ee37eb0846e4e45a0d143d61beb3. 
Apr 21 10:23:23.444988 containerd[1975]: time="2026-04-21T10:23:23.444945184Z" level=info msg="StartContainer for \"f0e1aa5e090b286ebf7d3f6633b315b79b36ee37eb0846e4e45a0d143d61beb3\" returns successfully" Apr 21 10:23:23.455719 containerd[1975]: time="2026-04-21T10:23:23.455673860Z" level=info msg="CreateContainer within sandbox \"0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 21 10:23:23.484372 containerd[1975]: time="2026-04-21T10:23:23.484229605Z" level=info msg="CreateContainer within sandbox \"0593618ba56c464789b08e4f3be9e542ee122375d487ef5435e65164fc843210\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"746bc154f7ef15ea9adbc339f6e66bb6131b7065311cb053ed9abe635bb84233\"" Apr 21 10:23:23.486742 containerd[1975]: time="2026-04-21T10:23:23.486701676Z" level=info msg="StartContainer for \"746bc154f7ef15ea9adbc339f6e66bb6131b7065311cb053ed9abe635bb84233\"" Apr 21 10:23:23.527906 systemd[1]: Started cri-containerd-746bc154f7ef15ea9adbc339f6e66bb6131b7065311cb053ed9abe635bb84233.scope - libcontainer container 746bc154f7ef15ea9adbc339f6e66bb6131b7065311cb053ed9abe635bb84233. 
Apr 21 10:23:23.597332 containerd[1975]: time="2026-04-21T10:23:23.597145984Z" level=info msg="StartContainer for \"746bc154f7ef15ea9adbc339f6e66bb6131b7065311cb053ed9abe635bb84233\" returns successfully" Apr 21 10:23:24.220545 kubelet[3393]: I0421 10:23:24.220145 3393 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5545b8bffd-q642t" podStartSLOduration=3.220121341 podStartE2EDuration="3.220121341s" podCreationTimestamp="2026-04-21 10:23:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-21 10:23:24.217962622 +0000 UTC m=+72.194917716" watchObservedRunningTime="2026-04-21 10:23:24.220121341 +0000 UTC m=+72.197076432" Apr 21 10:23:24.313070 systemd-networkd[1897]: calie8a712b866e: Gained IPv6LL Apr 21 10:23:26.874407 ntpd[1949]: Listen normally on 18 calie8a712b866e [fe80::ecee:eeff:feee:eeee%15]:123 Apr 21 10:23:26.874465 ntpd[1949]: Deleting interface #10 cali2e9ca1d4f02, fe80::ecee:eeff:feee:eeee%5#123, interface stats: received=0, sent=0, dropped=0, active_time=24 secs Apr 21 10:23:26.875589 ntpd[1949]: 21 Apr 10:23:26 ntpd[1949]: Listen normally on 18 calie8a712b866e [fe80::ecee:eeff:feee:eeee%15]:123 Apr 21 10:23:26.875589 ntpd[1949]: 21 Apr 10:23:26 ntpd[1949]: Deleting interface #10 cali2e9ca1d4f02, fe80::ecee:eeff:feee:eeee%5#123, interface stats: received=0, sent=0, dropped=0, active_time=24 secs Apr 21 10:23:28.357866 systemd[1]: Started sshd@9-172.31.24.37:22-50.85.169.122:55234.service - OpenSSH per-connection server daemon (50.85.169.122:55234). Apr 21 10:23:29.483199 sshd[6780]: Accepted publickey for core from 50.85.169.122 port 55234 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:23:29.490342 sshd[6780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:23:29.503486 systemd-logind[1955]: New session 10 of user core. 
Apr 21 10:23:29.506760 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 21 10:23:31.107911 sshd[6780]: pam_unix(sshd:session): session closed for user core Apr 21 10:23:31.112871 systemd[1]: sshd@9-172.31.24.37:22-50.85.169.122:55234.service: Deactivated successfully. Apr 21 10:23:31.115799 systemd[1]: session-10.scope: Deactivated successfully. Apr 21 10:23:31.117688 systemd-logind[1955]: Session 10 logged out. Waiting for processes to exit. Apr 21 10:23:31.121569 systemd-logind[1955]: Removed session 10. Apr 21 10:23:36.289291 systemd[1]: Started sshd@10-172.31.24.37:22-50.85.169.122:34098.service - OpenSSH per-connection server daemon (50.85.169.122:34098). Apr 21 10:23:37.382329 sshd[6825]: Accepted publickey for core from 50.85.169.122 port 34098 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:23:37.387648 sshd[6825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:23:37.394327 systemd-logind[1955]: New session 11 of user core. Apr 21 10:23:37.400800 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 21 10:23:38.487919 sshd[6825]: pam_unix(sshd:session): session closed for user core Apr 21 10:23:38.493062 systemd-logind[1955]: Session 11 logged out. Waiting for processes to exit. Apr 21 10:23:38.494364 systemd[1]: sshd@10-172.31.24.37:22-50.85.169.122:34098.service: Deactivated successfully. Apr 21 10:23:38.496438 systemd[1]: session-11.scope: Deactivated successfully. Apr 21 10:23:38.497976 systemd-logind[1955]: Removed session 11. Apr 21 10:23:38.665920 systemd[1]: Started sshd@11-172.31.24.37:22-50.85.169.122:34104.service - OpenSSH per-connection server daemon (50.85.169.122:34104). 
Apr 21 10:23:39.728469 sshd[6839]: Accepted publickey for core from 50.85.169.122 port 34104 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:23:39.729229 sshd[6839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:23:39.736036 systemd-logind[1955]: New session 12 of user core. Apr 21 10:23:39.740743 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 21 10:23:40.625731 sshd[6839]: pam_unix(sshd:session): session closed for user core Apr 21 10:23:40.635039 systemd[1]: sshd@11-172.31.24.37:22-50.85.169.122:34104.service: Deactivated successfully. Apr 21 10:23:40.635304 systemd-logind[1955]: Session 12 logged out. Waiting for processes to exit. Apr 21 10:23:40.639229 systemd[1]: session-12.scope: Deactivated successfully. Apr 21 10:23:40.640950 systemd-logind[1955]: Removed session 12. Apr 21 10:23:40.792876 systemd[1]: Started sshd@12-172.31.24.37:22-50.85.169.122:59218.service - OpenSSH per-connection server daemon (50.85.169.122:59218). Apr 21 10:23:41.807147 sshd[6860]: Accepted publickey for core from 50.85.169.122 port 59218 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:23:41.807835 sshd[6860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:23:41.812968 systemd-logind[1955]: New session 13 of user core. Apr 21 10:23:41.819858 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 21 10:23:42.599577 sshd[6860]: pam_unix(sshd:session): session closed for user core Apr 21 10:23:42.608218 systemd[1]: sshd@12-172.31.24.37:22-50.85.169.122:59218.service: Deactivated successfully. Apr 21 10:23:42.611989 systemd[1]: session-13.scope: Deactivated successfully. Apr 21 10:23:42.612881 systemd-logind[1955]: Session 13 logged out. Waiting for processes to exit. Apr 21 10:23:42.616266 systemd-logind[1955]: Removed session 13. 
Apr 21 10:23:44.319599 kubelet[3393]: I0421 10:23:44.319556 3393 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 21 10:23:47.787073 systemd[1]: Started sshd@13-172.31.24.37:22-50.85.169.122:59234.service - OpenSSH per-connection server daemon (50.85.169.122:59234). Apr 21 10:23:48.911571 sshd[6907]: Accepted publickey for core from 50.85.169.122 port 59234 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:23:48.915843 sshd[6907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:23:48.921539 systemd-logind[1955]: New session 14 of user core. Apr 21 10:23:48.928809 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 21 10:23:50.360031 sshd[6907]: pam_unix(sshd:session): session closed for user core Apr 21 10:23:50.365629 systemd-logind[1955]: Session 14 logged out. Waiting for processes to exit. Apr 21 10:23:50.366176 systemd[1]: sshd@13-172.31.24.37:22-50.85.169.122:59234.service: Deactivated successfully. Apr 21 10:23:50.369089 systemd[1]: session-14.scope: Deactivated successfully. Apr 21 10:23:50.370972 systemd-logind[1955]: Removed session 14. Apr 21 10:23:50.550310 systemd[1]: Started sshd@14-172.31.24.37:22-50.85.169.122:41112.service - OpenSSH per-connection server daemon (50.85.169.122:41112). Apr 21 10:23:51.580301 sshd[6946]: Accepted publickey for core from 50.85.169.122 port 41112 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:23:51.582450 sshd[6946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:23:51.587414 systemd-logind[1955]: New session 15 of user core. Apr 21 10:23:51.593893 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 21 10:23:52.871202 sshd[6946]: pam_unix(sshd:session): session closed for user core Apr 21 10:23:52.876242 systemd[1]: sshd@14-172.31.24.37:22-50.85.169.122:41112.service: Deactivated successfully. 
Apr 21 10:23:52.879348 systemd[1]: session-15.scope: Deactivated successfully. Apr 21 10:23:52.880246 systemd-logind[1955]: Session 15 logged out. Waiting for processes to exit. Apr 21 10:23:52.881618 systemd-logind[1955]: Removed session 15. Apr 21 10:23:53.045981 systemd[1]: Started sshd@15-172.31.24.37:22-50.85.169.122:41122.service - OpenSSH per-connection server daemon (50.85.169.122:41122). Apr 21 10:23:54.076616 sshd[6957]: Accepted publickey for core from 50.85.169.122 port 41122 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:23:54.078393 sshd[6957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:23:54.083620 systemd-logind[1955]: New session 16 of user core. Apr 21 10:23:54.088759 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 21 10:23:54.811636 systemd[1]: run-containerd-runc-k8s.io-d2686ca1f75b195969bd5d21b6dfdae469c9f5bd8646fd098bad8ccb0091bd11-runc.njp7s5.mount: Deactivated successfully. Apr 21 10:23:55.542297 sshd[6957]: pam_unix(sshd:session): session closed for user core Apr 21 10:23:55.558724 systemd[1]: sshd@15-172.31.24.37:22-50.85.169.122:41122.service: Deactivated successfully. Apr 21 10:23:55.561720 systemd[1]: session-16.scope: Deactivated successfully. Apr 21 10:23:55.563092 systemd-logind[1955]: Session 16 logged out. Waiting for processes to exit. Apr 21 10:23:55.564926 systemd-logind[1955]: Removed session 16. Apr 21 10:23:55.707401 systemd[1]: Started sshd@16-172.31.24.37:22-50.85.169.122:41136.service - OpenSSH per-connection server daemon (50.85.169.122:41136). Apr 21 10:23:56.762894 sshd[7005]: Accepted publickey for core from 50.85.169.122 port 41136 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0 Apr 21 10:23:56.768173 sshd[7005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 21 10:23:56.773956 systemd-logind[1955]: New session 17 of user core. 
Apr 21 10:23:56.780787 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 21 10:23:58.291886 sshd[7005]: pam_unix(sshd:session): session closed for user core
Apr 21 10:23:58.296829 systemd-logind[1955]: Session 17 logged out. Waiting for processes to exit.
Apr 21 10:23:58.297541 systemd[1]: sshd@16-172.31.24.37:22-50.85.169.122:41136.service: Deactivated successfully.
Apr 21 10:23:58.300889 systemd[1]: session-17.scope: Deactivated successfully.
Apr 21 10:23:58.302109 systemd-logind[1955]: Removed session 17.
Apr 21 10:23:58.468931 systemd[1]: Started sshd@17-172.31.24.37:22-50.85.169.122:41152.service - OpenSSH per-connection server daemon (50.85.169.122:41152).
Apr 21 10:23:59.494879 sshd[7016]: Accepted publickey for core from 50.85.169.122 port 41152 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:23:59.495640 sshd[7016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:23:59.502697 systemd-logind[1955]: New session 18 of user core.
Apr 21 10:23:59.506733 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 21 10:24:00.345445 sshd[7016]: pam_unix(sshd:session): session closed for user core
Apr 21 10:24:00.351717 systemd-logind[1955]: Session 18 logged out. Waiting for processes to exit.
Apr 21 10:24:00.352387 systemd[1]: sshd@17-172.31.24.37:22-50.85.169.122:41152.service: Deactivated successfully.
Apr 21 10:24:00.356329 systemd[1]: session-18.scope: Deactivated successfully.
Apr 21 10:24:00.357930 systemd-logind[1955]: Removed session 18.
Apr 21 10:24:03.115060 systemd[1]: run-containerd-runc-k8s.io-b4374ae899ed982787b865ecbd8bd3eb7f94dd14de7ab3aced8a0f809587eefd-runc.8V81C4.mount: Deactivated successfully.
Apr 21 10:24:05.519440 systemd[1]: Started sshd@18-172.31.24.37:22-50.85.169.122:55080.service - OpenSSH per-connection server daemon (50.85.169.122:55080).
Apr 21 10:24:06.568676 sshd[7076]: Accepted publickey for core from 50.85.169.122 port 55080 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:24:06.570977 sshd[7076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:24:06.580363 systemd-logind[1955]: New session 19 of user core.
Apr 21 10:24:06.588789 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 21 10:24:07.430335 sshd[7076]: pam_unix(sshd:session): session closed for user core
Apr 21 10:24:07.434689 systemd[1]: sshd@18-172.31.24.37:22-50.85.169.122:55080.service: Deactivated successfully.
Apr 21 10:24:07.437600 systemd[1]: session-19.scope: Deactivated successfully.
Apr 21 10:24:07.439316 systemd-logind[1955]: Session 19 logged out. Waiting for processes to exit.
Apr 21 10:24:07.440661 systemd-logind[1955]: Removed session 19.
Apr 21 10:24:12.616964 systemd[1]: Started sshd@19-172.31.24.37:22-50.85.169.122:60192.service - OpenSSH per-connection server daemon (50.85.169.122:60192).
Apr 21 10:24:13.674189 sshd[7092]: Accepted publickey for core from 50.85.169.122 port 60192 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:24:13.677778 sshd[7092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:24:13.683628 systemd-logind[1955]: New session 20 of user core.
Apr 21 10:24:13.689798 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 21 10:24:14.565809 sshd[7092]: pam_unix(sshd:session): session closed for user core
Apr 21 10:24:14.569329 systemd[1]: sshd@19-172.31.24.37:22-50.85.169.122:60192.service: Deactivated successfully.
Apr 21 10:24:14.571901 systemd[1]: session-20.scope: Deactivated successfully.
Apr 21 10:24:14.573893 systemd-logind[1955]: Session 20 logged out. Waiting for processes to exit.
Apr 21 10:24:14.575259 systemd-logind[1955]: Removed session 20.
Apr 21 10:24:18.464720 containerd[1975]: time="2026-04-21T10:24:18.444012835Z" level=info msg="StopPodSandbox for \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\""
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.015 [WARNING][7134] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0"
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.019 [INFO][7134] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f"
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.019 [INFO][7134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" iface="eth0" netns=""
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.019 [INFO][7134] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f"
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.019 [INFO][7134] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f"
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.406 [INFO][7141] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0"
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.410 [INFO][7141] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.410 [INFO][7141] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.426 [WARNING][7141] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0"
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.426 [INFO][7141] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0"
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.428 [INFO][7141] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:24:19.433282 containerd[1975]: 2026-04-21 10:24:19.430 [INFO][7134] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f"
Apr 21 10:24:19.438886 containerd[1975]: time="2026-04-21T10:24:19.438820800Z" level=info msg="TearDown network for sandbox \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\" successfully"
Apr 21 10:24:19.438886 containerd[1975]: time="2026-04-21T10:24:19.438882734Z" level=info msg="StopPodSandbox for \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\" returns successfully"
Apr 21 10:24:19.463463 containerd[1975]: time="2026-04-21T10:24:19.463361152Z" level=info msg="RemovePodSandbox for \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\""
Apr 21 10:24:19.469245 containerd[1975]: time="2026-04-21T10:24:19.469186746Z" level=info msg="Forcibly stopping sandbox \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\""
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.570 [WARNING][7166] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" WorkloadEndpoint="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0"
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.571 [INFO][7166] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f"
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.571 [INFO][7166] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" iface="eth0" netns=""
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.571 [INFO][7166] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f"
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.571 [INFO][7166] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f"
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.598 [INFO][7174] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0"
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.598 [INFO][7174] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.598 [INFO][7174] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.609 [WARNING][7174] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0"
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.609 [INFO][7174] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" HandleID="k8s-pod-network.c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f" Workload="ip--172--31--24--37-k8s-whisker--54c9c74cc--nh8pr-eth0"
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.611 [INFO][7174] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 21 10:24:19.618863 containerd[1975]: 2026-04-21 10:24:19.615 [INFO][7166] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f"
Apr 21 10:24:19.619399 containerd[1975]: time="2026-04-21T10:24:19.618911143Z" level=info msg="TearDown network for sandbox \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\" successfully"
Apr 21 10:24:19.733002 containerd[1975]: time="2026-04-21T10:24:19.732812991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 21 10:24:19.733232 containerd[1975]: time="2026-04-21T10:24:19.733206505Z" level=info msg="RemovePodSandbox \"c4aaadcfbc646979a7cd7c5e619123cf8c0acd6ab0c767e56a2fb7a77295477f\" returns successfully"
Apr 21 10:24:19.750823 systemd[1]: Started sshd@20-172.31.24.37:22-50.85.169.122:33992.service - OpenSSH per-connection server daemon (50.85.169.122:33992).
Apr 21 10:24:20.892220 sshd[7181]: Accepted publickey for core from 50.85.169.122 port 33992 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:24:20.896653 sshd[7181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:24:20.902487 systemd-logind[1955]: New session 21 of user core.
Apr 21 10:24:20.907756 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 21 10:24:22.789763 sshd[7181]: pam_unix(sshd:session): session closed for user core
Apr 21 10:24:22.793574 systemd[1]: sshd@20-172.31.24.37:22-50.85.169.122:33992.service: Deactivated successfully.
Apr 21 10:24:22.795908 systemd[1]: session-21.scope: Deactivated successfully.
Apr 21 10:24:22.798028 systemd-logind[1955]: Session 21 logged out. Waiting for processes to exit.
Apr 21 10:24:22.800082 systemd-logind[1955]: Removed session 21.
Apr 21 10:24:27.964161 systemd[1]: Started sshd@21-172.31.24.37:22-50.85.169.122:33996.service - OpenSSH per-connection server daemon (50.85.169.122:33996).
Apr 21 10:24:28.990086 sshd[7233]: Accepted publickey for core from 50.85.169.122 port 33996 ssh2: RSA SHA256:TtBVv9Qma6SMs1T9xoa67n+i4tpaX/fC+nfjhDX7hV0
Apr 21 10:24:28.992901 sshd[7233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 21 10:24:28.997843 systemd-logind[1955]: New session 22 of user core.
Apr 21 10:24:29.002911 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 21 10:24:30.131515 sshd[7233]: pam_unix(sshd:session): session closed for user core
Apr 21 10:24:30.136347 systemd-logind[1955]: Session 22 logged out. Waiting for processes to exit.
Apr 21 10:24:30.137579 systemd[1]: sshd@21-172.31.24.37:22-50.85.169.122:33996.service: Deactivated successfully.
Apr 21 10:24:30.140222 systemd[1]: session-22.scope: Deactivated successfully.
Apr 21 10:24:30.141845 systemd-logind[1955]: Removed session 22.